00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 631 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3297 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.182 > git --version # 'git version 2.39.2' 00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.367 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.378 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.388 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:06.388 > git config core.sparsecheckout # timeout=10 00:00:06.398 > git read-tree -mu HEAD # timeout=10 00:00:06.411 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:06.432 Commit message: "packer: Add bios builder" 00:00:06.432 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.505 [Pipeline] Start of Pipeline 00:00:06.517 [Pipeline] library 00:00:06.519 Loading library shm_lib@master 00:00:06.519 Library shm_lib@master is cached. Copying from home. 00:00:06.538 [Pipeline] node 00:00:06.545 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.549 [Pipeline] { 00:00:06.561 [Pipeline] catchError 00:00:06.562 [Pipeline] { 00:00:06.573 [Pipeline] wrap 00:00:06.581 [Pipeline] { 00:00:06.588 [Pipeline] stage 00:00:06.590 [Pipeline] { (Prologue) 00:00:06.752 [Pipeline] sh 00:00:07.041 + logger -p user.info -t JENKINS-CI 00:00:07.059 [Pipeline] echo 00:00:07.061 Node: CYP9 00:00:07.069 [Pipeline] sh 00:00:07.386 [Pipeline] setCustomBuildProperty 00:00:07.402 [Pipeline] echo 00:00:07.405 Cleanup processes 00:00:07.412 [Pipeline] sh 00:00:07.702 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.702 632180 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.718 [Pipeline] sh 00:00:08.009 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.009 ++ grep -v 'sudo pgrep' 00:00:08.009 ++ awk '{print $1}' 00:00:08.009 + sudo kill -9 00:00:08.009 + true 00:00:08.026 [Pipeline] cleanWs 00:00:08.038 [WS-CLEANUP] Deleting project workspace... 00:00:08.038 [WS-CLEANUP] Deferred wipeout is used... 
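[Editor's note] The prologue above clears any leftover test processes before wiping the workspace: pgrep lists everything still running under the job directory, the pgrep invocation itself is filtered out, and the surviving PIDs are killed, with a trailing "true" so an empty match cannot fail the stage. A minimal standalone sketch of that idiom follows; the helper name cleanup_stale_spdk is illustrative (not a script in the log), and the workspace path is copied from the log.

# Sketch of the stale-process cleanup traced above (cleanup_stale_spdk is a hypothetical name).
cleanup_stale_spdk() {
    local ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as it appears in the log
    local pids
    # List matching processes, drop the pgrep line itself, keep only the PID column.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    if [ -n "$pids" ]; then
        sudo kill -9 $pids || true   # '|| true' mirrors the '+ true' guard in the log
    fi
}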
00:00:08.046 [WS-CLEANUP] done 00:00:08.051 [Pipeline] setCustomBuildProperty 00:00:08.068 [Pipeline] sh 00:00:08.390 + sudo git config --global --replace-all safe.directory '*' 00:00:08.487 [Pipeline] httpRequest 00:00:08.523 [Pipeline] echo 00:00:08.525 Sorcerer 10.211.164.101 is alive 00:00:08.533 [Pipeline] httpRequest 00:00:08.538 HttpMethod: GET 00:00:08.539 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.539 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.559 Response Code: HTTP/1.1 200 OK 00:00:08.559 Success: Status code 200 is in the accepted range: 200,404 00:00:08.560 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:29.018 [Pipeline] sh 00:00:29.305 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:29.323 [Pipeline] httpRequest 00:00:29.347 [Pipeline] echo 00:00:29.349 Sorcerer 10.211.164.101 is alive 00:00:29.360 [Pipeline] httpRequest 00:00:29.366 HttpMethod: GET 00:00:29.366 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:29.367 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:29.385 Response Code: HTTP/1.1 200 OK 00:00:29.386 Success: Status code 200 is in the accepted range: 200,404 00:00:29.387 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:08.880 [Pipeline] sh 00:01:09.168 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:11.729 [Pipeline] sh 00:01:12.017 + git -C spdk log --oneline -n5 00:01:12.018 dbef7efac test: fix dpdk builds on ubuntu24 00:01:12.018 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:12.018 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:12.018 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:12.018 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:12.039 [Pipeline] withCredentials 00:01:12.052 > git --version # timeout=10 00:01:12.066 > git --version # 'git version 2.39.2' 00:01:12.089 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:12.091 [Pipeline] { 00:01:12.101 [Pipeline] retry 00:01:12.103 [Pipeline] { 00:01:12.121 [Pipeline] sh 00:01:12.414 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:17.723 [Pipeline] } 00:01:17.747 [Pipeline] // retry 00:01:17.753 [Pipeline] } 00:01:17.774 [Pipeline] // withCredentials 00:01:17.785 [Pipeline] httpRequest 00:01:17.804 [Pipeline] echo 00:01:17.805 Sorcerer 10.211.164.101 is alive 00:01:17.815 [Pipeline] httpRequest 00:01:17.820 HttpMethod: GET 00:01:17.821 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.821 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.824 Response Code: HTTP/1.1 200 OK 00:01:17.824 Success: Status code 200 is in the accepted range: 200,404 00:01:17.825 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:23.955 [Pipeline] sh 00:01:24.243 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.172 [Pipeline] sh 00:01:26.487 + git -C dpdk log --oneline -n5 00:01:26.487 eeb0605f11 version: 23.11.0 00:01:26.487 238778122a doc: update release notes for 23.11 
00:01:26.487 46aa6b3cfc doc: fix description of RSS features 00:01:26.487 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:26.487 7e421ae345 devtools: support skipping forbid rule check 00:01:26.499 [Pipeline] } 00:01:26.515 [Pipeline] // stage 00:01:26.523 [Pipeline] stage 00:01:26.525 [Pipeline] { (Prepare) 00:01:26.546 [Pipeline] writeFile 00:01:26.562 [Pipeline] sh 00:01:26.847 + logger -p user.info -t JENKINS-CI 00:01:26.859 [Pipeline] sh 00:01:27.143 + logger -p user.info -t JENKINS-CI 00:01:27.157 [Pipeline] sh 00:01:27.445 + cat autorun-spdk.conf 00:01:27.445 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.445 SPDK_TEST_NVMF=1 00:01:27.445 SPDK_TEST_NVME_CLI=1 00:01:27.445 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.445 SPDK_TEST_NVMF_NICS=e810 00:01:27.445 SPDK_TEST_VFIOUSER=1 00:01:27.445 SPDK_RUN_UBSAN=1 00:01:27.445 NET_TYPE=phy 00:01:27.445 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.445 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.454 RUN_NIGHTLY=1 00:01:27.458 [Pipeline] readFile 00:01:27.486 [Pipeline] withEnv 00:01:27.488 [Pipeline] { 00:01:27.501 [Pipeline] sh 00:01:27.787 + set -ex 00:01:27.787 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:27.787 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.787 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.787 ++ SPDK_TEST_NVMF=1 00:01:27.787 ++ SPDK_TEST_NVME_CLI=1 00:01:27.787 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.787 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.787 ++ SPDK_TEST_VFIOUSER=1 00:01:27.787 ++ SPDK_RUN_UBSAN=1 00:01:27.787 ++ NET_TYPE=phy 00:01:27.787 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:27.787 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.787 ++ RUN_NIGHTLY=1 00:01:27.787 + case $SPDK_TEST_NVMF_NICS in 00:01:27.787 + DRIVERS=ice 00:01:27.787 + [[ tcp == \r\d\m\a ]] 00:01:27.787 + [[ -n ice ]] 00:01:27.787 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:27.787 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:27.787 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:27.787 rmmod: ERROR: Module irdma is not currently loaded 00:01:27.787 rmmod: ERROR: Module i40iw is not currently loaded 00:01:27.787 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:27.787 + true 00:01:27.787 + for D in $DRIVERS 00:01:27.787 + sudo modprobe ice 00:01:27.787 + exit 0 00:01:27.797 [Pipeline] } 00:01:27.815 [Pipeline] // withEnv 00:01:27.821 [Pipeline] } 00:01:27.838 [Pipeline] // stage 00:01:27.847 [Pipeline] catchError 00:01:27.848 [Pipeline] { 00:01:27.861 [Pipeline] timeout 00:01:27.862 Timeout set to expire in 50 min 00:01:27.863 [Pipeline] { 00:01:27.872 [Pipeline] stage 00:01:27.874 [Pipeline] { (Tests) 00:01:27.884 [Pipeline] sh 00:01:28.169 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.169 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.169 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.169 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:28.169 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.169 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.169 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:28.169 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.169 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:28.169 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:28.169 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:28.169 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:28.169 + source /etc/os-release 00:01:28.169 ++ NAME='Fedora Linux' 00:01:28.169 ++ VERSION='38 (Cloud Edition)' 00:01:28.169 ++ ID=fedora 00:01:28.169 ++ VERSION_ID=38 00:01:28.169 ++ VERSION_CODENAME= 00:01:28.169 ++ PLATFORM_ID=platform:f38 00:01:28.169 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:28.169 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:28.169 ++ LOGO=fedora-logo-icon 00:01:28.169 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:28.169 ++ HOME_URL=https://fedoraproject.org/ 00:01:28.169 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:28.169 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:28.169 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:28.169 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:28.169 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:28.169 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:28.169 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:28.169 ++ SUPPORT_END=2024-05-14 00:01:28.169 ++ VARIANT='Cloud Edition' 00:01:28.169 ++ VARIANT_ID=cloud 00:01:28.169 + uname -a 00:01:28.169 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:28.169 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:30.719 Hugepages 00:01:30.719 node hugesize free / total 00:01:30.719 node0 1048576kB 0 / 0 00:01:30.719 node0 2048kB 0 / 0 00:01:30.719 node1 1048576kB 0 / 0 00:01:30.719 node1 2048kB 0 / 0 00:01:30.719 00:01:30.719 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.719 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:30.719 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:30.980 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:30.980 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:30.980 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:30.980 + rm -f /tmp/spdk-ld-path 00:01:30.981 + source autorun-spdk.conf 00:01:30.981 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.981 ++ SPDK_TEST_NVMF=1 00:01:30.981 ++ SPDK_TEST_NVME_CLI=1 00:01:30.981 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.981 ++ SPDK_TEST_NVMF_NICS=e810 00:01:30.981 ++ SPDK_TEST_VFIOUSER=1 00:01:30.981 ++ SPDK_RUN_UBSAN=1 00:01:30.981 ++ NET_TYPE=phy 00:01:30.981 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:30.981 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.981 ++ RUN_NIGHTLY=1 00:01:30.981 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.981 + [[ -n '' ]] 00:01:30.981 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.981 + for M in /var/spdk/build-*-manifest.txt 00:01:30.981 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.981 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:30.981 + for M in /var/spdk/build-*-manifest.txt 00:01:30.981 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.981 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:30.981 ++ uname 00:01:30.981 + [[ Linux == \L\i\n\u\x ]] 00:01:30.981 + sudo dmesg -T 00:01:30.981 + sudo dmesg --clear 00:01:30.981 + dmesg_pid=633745 00:01:30.981 + [[ Fedora Linux == FreeBSD ]] 00:01:30.981 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.981 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.981 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.981 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:30.981 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:30.981 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.981 + sudo dmesg -Tw 00:01:30.981 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.981 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.981 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.981 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:30.981 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.981 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.981 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.981 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.981 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.981 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.981 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:30.981 Test configuration: 00:01:30.981 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.981 SPDK_TEST_NVMF=1 00:01:30.981 SPDK_TEST_NVME_CLI=1 00:01:30.981 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.981 SPDK_TEST_NVMF_NICS=e810 00:01:30.981 SPDK_TEST_VFIOUSER=1 00:01:30.981 SPDK_RUN_UBSAN=1 00:01:30.981 NET_TYPE=phy 00:01:30.981 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:30.981 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.243 RUN_NIGHTLY=1 13:12:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:31.243 13:12:28 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.243 13:12:28 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.243 13:12:28 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.243 13:12:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.243 13:12:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.243 13:12:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.243 13:12:28 -- paths/export.sh@5 -- $ export PATH 00:01:31.243 13:12:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.243 13:12:28 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:31.243 13:12:28 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:31.243 13:12:28 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721992348.XXXXXX 00:01:31.243 13:12:28 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721992348.riAd6h 00:01:31.243 13:12:28 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.243 13:12:28 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:31.243 13:12:28 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:31.243 13:12:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.243 13:12:28 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:31.243 13:12:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.243 13:12:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.243 13:12:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.243 13:12:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.243 Fri Jul 26 11:12:28 AM UTC 2024 00:01:31.243 13:12:28 -- spdk/autobuild.sh@17 -- $ git describe 
--tags 00:01:31.243 LTS-60-gdbef7efac 00:01:31.243 13:12:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:31.243 13:12:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.243 13:12:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.243 13:12:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:31.243 13:12:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:31.243 13:12:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.243 ************************************ 00:01:31.243 START TEST ubsan 00:01:31.243 ************************************ 00:01:31.243 13:12:28 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:31.243 using ubsan 00:01:31.243 00:01:31.243 real 0m0.001s 00:01:31.243 user 0m0.000s 00:01:31.243 sys 0m0.000s 00:01:31.243 13:12:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.243 13:12:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.243 ************************************ 00:01:31.243 END TEST ubsan 00:01:31.243 ************************************ 00:01:31.243 13:12:28 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:31.243 13:12:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:31.243 13:12:28 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:31.243 13:12:28 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:31.243 13:12:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:31.243 13:12:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.243 ************************************ 00:01:31.243 START TEST build_native_dpdk 00:01:31.243 ************************************ 00:01:31.243 13:12:28 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:31.243 13:12:28 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:31.243 13:12:28 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:31.243 13:12:28 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:31.243 13:12:28 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:31.243 13:12:28 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:31.243 13:12:28 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:31.243 13:12:28 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:31.243 13:12:28 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:31.243 13:12:28 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:31.243 13:12:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:31.243 13:12:28 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:31.243 13:12:28 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:31.243 13:12:28 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.243 13:12:28 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.243 13:12:28 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:31.243 13:12:28 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.243 13:12:28 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:31.243 eeb0605f11 version: 23.11.0 00:01:31.243 238778122a doc: update release notes for 23.11 00:01:31.243 46aa6b3cfc doc: fix description of RSS features 00:01:31.243 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.243 7e421ae345 devtools: support skipping forbid rule check 00:01:31.243 13:12:28 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:31.243 13:12:28 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:31.243 13:12:28 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:31.243 13:12:28 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:31.243 13:12:28 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:31.243 13:12:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:31.243 13:12:28 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:31.244 13:12:28 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:31.244 13:12:28 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:31.244 13:12:28 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:31.244 13:12:28 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:31.244 13:12:28 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:31.244 13:12:28 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:31.244 13:12:28 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:31.244 13:12:28 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:31.244 13:12:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:31.244 13:12:28 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:31.244 13:12:28 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:31.244 13:12:28 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:31.244 13:12:28 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:31.244 13:12:28 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:31.244 13:12:28 -- scripts/common.sh@343 -- $ case "$op" in 00:01:31.244 13:12:28 -- scripts/common.sh@344 -- $ : 1 00:01:31.244 13:12:28 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:31.244 13:12:28 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:31.244 13:12:28 -- scripts/common.sh@364 -- $ decimal 23 00:01:31.244 13:12:28 -- scripts/common.sh@352 -- $ local d=23 00:01:31.244 13:12:28 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:31.244 13:12:28 -- scripts/common.sh@354 -- $ echo 23 00:01:31.244 13:12:28 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:31.244 13:12:28 -- scripts/common.sh@365 -- $ decimal 21 00:01:31.244 13:12:28 -- scripts/common.sh@352 -- $ local d=21 00:01:31.244 13:12:28 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:31.244 13:12:28 -- scripts/common.sh@354 -- $ echo 21 00:01:31.244 13:12:28 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:31.244 13:12:28 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:31.244 13:12:28 -- scripts/common.sh@366 -- $ return 1 00:01:31.244 13:12:28 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:31.244 patching file config/rte_config.h 00:01:31.244 Hunk #1 succeeded at 60 (offset 1 line). 00:01:31.244 13:12:28 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:31.244 13:12:28 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:31.244 13:12:28 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:31.244 13:12:28 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:31.244 13:12:28 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:31.244 13:12:28 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:31.244 13:12:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:31.244 13:12:28 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:31.244 13:12:28 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:31.244 13:12:28 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:31.244 13:12:28 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:31.244 13:12:28 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:31.244 13:12:28 -- scripts/common.sh@343 -- $ case "$op" in 00:01:31.244 13:12:28 -- scripts/common.sh@344 -- $ : 1 00:01:31.244 13:12:28 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:31.244 13:12:28 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:31.244 13:12:28 -- scripts/common.sh@364 -- $ decimal 23 00:01:31.244 13:12:28 -- scripts/common.sh@352 -- $ local d=23 00:01:31.244 13:12:28 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:31.244 13:12:28 -- scripts/common.sh@354 -- $ echo 23 00:01:31.244 13:12:28 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:31.244 13:12:28 -- scripts/common.sh@365 -- $ decimal 24 00:01:31.244 13:12:28 -- scripts/common.sh@352 -- $ local d=24 00:01:31.244 13:12:28 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:31.244 13:12:28 -- scripts/common.sh@354 -- $ echo 24 00:01:31.244 13:12:28 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:31.244 13:12:28 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:31.244 13:12:28 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:31.244 13:12:28 -- scripts/common.sh@367 -- $ return 0 00:01:31.244 13:12:28 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:31.244 patching file lib/pcapng/rte_pcapng.c 00:01:31.244 13:12:28 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:31.244 13:12:28 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:31.244 13:12:28 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:31.244 13:12:28 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:31.244 13:12:28 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.539 The Meson build system 00:01:36.539 Version: 1.3.1 00:01:36.539 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.539 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:36.539 Build type: native build 00:01:36.539 Program cat found: YES (/usr/bin/cat) 00:01:36.539 Project name: DPDK 00:01:36.539 Project version: 23.11.0 00:01:36.539 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:36.539 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:36.539 Host machine cpu family: x86_64 00:01:36.539 Host machine cpu: x86_64 00:01:36.539 Message: ## Building in Developer Mode ## 00:01:36.539 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.539 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:36.539 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.539 Program python3 found: YES (/usr/bin/python3) 00:01:36.539 Program cat found: YES (/usr/bin/cat) 00:01:36.539 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
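[Editor's note] The xtrace above walks through the cmp_versions/lt helpers in scripts/common.sh: they split each version string on ".-:" and compare the fields numerically, concluding that DPDK 23.11.0 is not older than 21.11.0 but is older than 24.07.0, which is what gates the rte_config.h and rte_pcapng.c patches applied here. A self-contained sketch of the same field-by-field test is below; version_lt is an illustrative name (not SPDK's helper), and it assumes purely numeric fields like the versions in this log.

# Illustrative re-implementation of the dotted-version comparison traced above.
version_lt() {                        # returns 0 if $1 < $2, 1 otherwise
    local IFS=.-:
    local -a ver1=($1) ver2=($2)      # split "23.11.0" into fields 23 11 0
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        # 10# forces base-10 so fields like "07" are not read as octal
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
    done
    return 1                          # equal versions are not less-than
}
version_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0"   # matches the 'return 1' branch above
version_lt 23.11.0 24.07.0 && echo "23.11.0 <  24.07.0"   # matches the 'return 0' branch above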
00:01:36.539 Compiler for C supports arguments -march=native: YES 00:01:36.539 Checking for size of "void *" : 8 00:01:36.539 Checking for size of "void *" : 8 (cached) 00:01:36.539 Library m found: YES 00:01:36.539 Library numa found: YES 00:01:36.539 Has header "numaif.h" : YES 00:01:36.539 Library fdt found: NO 00:01:36.539 Library execinfo found: NO 00:01:36.539 Has header "execinfo.h" : YES 00:01:36.539 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:36.539 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.539 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.539 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.539 Run-time dependency openssl found: YES 3.0.9 00:01:36.539 Run-time dependency libpcap found: YES 1.10.4 00:01:36.539 Has header "pcap.h" with dependency libpcap: YES 00:01:36.539 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.539 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.539 Compiler for C supports arguments -Wformat: YES 00:01:36.539 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.539 Compiler for C supports arguments -Wformat-security: NO 00:01:36.539 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.539 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.539 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.539 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.539 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.539 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.539 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.539 Compiler for C supports arguments -Wundef: YES 00:01:36.539 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.539 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.539 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.539 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.539 Program objdump found: YES (/usr/bin/objdump) 00:01:36.539 Compiler for C supports arguments -mavx512f: YES 00:01:36.539 Checking if "AVX512 checking" compiles: YES 00:01:36.539 Fetching value of define "__SSE4_2__" : 1 00:01:36.539 Fetching value of define "__AES__" : 1 00:01:36.539 Fetching value of define "__AVX__" : 1 00:01:36.539 Fetching value of define "__AVX2__" : 1 00:01:36.539 Fetching value of define "__AVX512BW__" : 1 00:01:36.539 Fetching value of define "__AVX512CD__" : 1 00:01:36.539 Fetching value of define "__AVX512DQ__" : 1 00:01:36.539 Fetching value of define "__AVX512F__" : 1 00:01:36.539 Fetching value of define "__AVX512VL__" : 1 00:01:36.539 Fetching value of define "__PCLMUL__" : 1 00:01:36.539 Fetching value of define "__RDRND__" : 1 00:01:36.539 Fetching value of define "__RDSEED__" : 1 00:01:36.539 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:36.539 Fetching value of define "__znver1__" : (undefined) 00:01:36.539 Fetching value of define "__znver2__" : (undefined) 00:01:36.539 Fetching value of define "__znver3__" : (undefined) 00:01:36.539 Fetching value of define "__znver4__" : (undefined) 00:01:36.539 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.539 Message: lib/log: Defining dependency "log" 00:01:36.539 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.539 Message: lib/telemetry: Defining dependency "telemetry" 
00:01:36.539 Checking for function "getentropy" : NO 00:01:36.539 Message: lib/eal: Defining dependency "eal" 00:01:36.539 Message: lib/ring: Defining dependency "ring" 00:01:36.539 Message: lib/rcu: Defining dependency "rcu" 00:01:36.539 Message: lib/mempool: Defining dependency "mempool" 00:01:36.539 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.539 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:36.539 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:36.539 Compiler for C supports arguments -mpclmul: YES 00:01:36.539 Compiler for C supports arguments -maes: YES 00:01:36.539 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.539 Compiler for C supports arguments -mavx512bw: YES 00:01:36.539 Compiler for C supports arguments -mavx512dq: YES 00:01:36.539 Compiler for C supports arguments -mavx512vl: YES 00:01:36.539 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.539 Compiler for C supports arguments -mavx2: YES 00:01:36.539 Compiler for C supports arguments -mavx: YES 00:01:36.539 Message: lib/net: Defining dependency "net" 00:01:36.539 Message: lib/meter: Defining dependency "meter" 00:01:36.539 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.539 Message: lib/pci: Defining dependency "pci" 00:01:36.539 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.539 Message: lib/metrics: Defining dependency "metrics" 00:01:36.539 Message: lib/hash: Defining dependency "hash" 00:01:36.539 Message: lib/timer: Defining dependency "timer" 00:01:36.539 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:36.539 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.539 Message: lib/acl: Defining dependency "acl" 00:01:36.539 Message: lib/bbdev: Defining dependency "bbdev" 00:01:36.539 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:36.539 Run-time dependency libelf found: YES 0.190 00:01:36.540 Message: lib/bpf: Defining dependency "bpf" 00:01:36.540 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:36.540 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.540 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.540 Message: lib/distributor: Defining dependency "distributor" 00:01:36.540 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.540 Message: lib/efd: Defining dependency "efd" 00:01:36.540 Message: lib/eventdev: Defining dependency "eventdev" 00:01:36.540 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:36.540 Message: lib/gpudev: Defining dependency "gpudev" 00:01:36.540 Message: lib/gro: Defining dependency "gro" 00:01:36.540 Message: lib/gso: Defining dependency "gso" 00:01:36.540 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:36.540 Message: lib/jobstats: Defining dependency "jobstats" 00:01:36.540 Message: lib/latencystats: Defining dependency "latencystats" 00:01:36.540 Message: lib/lpm: Defining dependency "lpm" 00:01:36.540 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.540 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.540 Fetching value of define "__AVX512IFMA__" : 1 00:01:36.540 Message: 
lib/member: Defining dependency "member" 00:01:36.540 Message: lib/pcapng: Defining dependency "pcapng" 00:01:36.540 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.540 Message: lib/power: Defining dependency "power" 00:01:36.540 Message: lib/rawdev: Defining dependency "rawdev" 00:01:36.540 Message: lib/regexdev: Defining dependency "regexdev" 00:01:36.540 Message: lib/mldev: Defining dependency "mldev" 00:01:36.540 Message: lib/rib: Defining dependency "rib" 00:01:36.540 Message: lib/reorder: Defining dependency "reorder" 00:01:36.540 Message: lib/sched: Defining dependency "sched" 00:01:36.540 Message: lib/security: Defining dependency "security" 00:01:36.540 Message: lib/stack: Defining dependency "stack" 00:01:36.540 Has header "linux/userfaultfd.h" : YES 00:01:36.540 Has header "linux/vduse.h" : YES 00:01:36.540 Message: lib/vhost: Defining dependency "vhost" 00:01:36.540 Message: lib/ipsec: Defining dependency "ipsec" 00:01:36.540 Message: lib/pdcp: Defining dependency "pdcp" 00:01:36.540 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.540 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.540 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.540 Message: lib/fib: Defining dependency "fib" 00:01:36.540 Message: lib/port: Defining dependency "port" 00:01:36.540 Message: lib/pdump: Defining dependency "pdump" 00:01:36.540 Message: lib/table: Defining dependency "table" 00:01:36.540 Message: lib/pipeline: Defining dependency "pipeline" 00:01:36.540 Message: lib/graph: Defining dependency "graph" 00:01:36.540 Message: lib/node: Defining dependency "node" 00:01:36.540 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.540 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.484 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.484 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:37.484 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.484 Compiler for C supports arguments -Wno-format: YES 00:01:37.484 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.484 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.485 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.485 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.485 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.485 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:37.485 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:37.485 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.485 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.485 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.485 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:37.485 Has header "sys/epoll.h" : YES 00:01:37.485 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.485 Configuring doxy-api-html.conf using configuration 00:01:37.485 Configuring doxy-api-man.conf using configuration 00:01:37.485 Program mandb found: YES (/usr/bin/mandb) 00:01:37.485 Program sphinx-build found: NO 00:01:37.485 Configuring rte_build_config.h using configuration 00:01:37.485 Message: 00:01:37.485 ================= 00:01:37.485 Applications Enabled 00:01:37.485 ================= 00:01:37.485 00:01:37.485 apps: 00:01:37.485 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:01:37.485 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:37.485 test-pmd, test-regex, test-sad, test-security-perf, 00:01:37.485 00:01:37.485 Message: 00:01:37.485 ================= 00:01:37.485 Libraries Enabled 00:01:37.485 ================= 00:01:37.485 00:01:37.485 libs: 00:01:37.485 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.485 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:37.485 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:37.485 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:37.485 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:37.485 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:37.485 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:37.485 00:01:37.485 00:01:37.485 Message: 00:01:37.485 =============== 00:01:37.485 Drivers Enabled 00:01:37.485 =============== 00:01:37.485 00:01:37.485 common: 00:01:37.485 00:01:37.485 bus: 00:01:37.485 pci, vdev, 00:01:37.485 mempool: 00:01:37.485 ring, 00:01:37.485 dma: 00:01:37.485 00:01:37.485 net: 00:01:37.485 i40e, 00:01:37.485 raw: 00:01:37.485 00:01:37.485 crypto: 00:01:37.485 00:01:37.485 compress: 00:01:37.485 00:01:37.485 regex: 00:01:37.485 00:01:37.485 ml: 00:01:37.485 00:01:37.485 vdpa: 00:01:37.485 00:01:37.485 event: 00:01:37.485 00:01:37.485 baseband: 00:01:37.485 00:01:37.485 gpu: 00:01:37.485 00:01:37.485 00:01:37.485 Message: 00:01:37.485 ================= 00:01:37.485 Content Skipped 00:01:37.485 ================= 00:01:37.485 00:01:37.485 apps: 00:01:37.485 00:01:37.485 libs: 00:01:37.485 00:01:37.485 drivers: 00:01:37.485 common/cpt: not in enabled drivers build config 00:01:37.485 common/dpaax: not in enabled drivers build config 00:01:37.485 common/iavf: not in enabled drivers build config 00:01:37.485 common/idpf: not in enabled drivers build config 00:01:37.485 common/mvep: not in enabled drivers build config 00:01:37.485 common/octeontx: not in enabled drivers build config 00:01:37.485 bus/auxiliary: not in enabled drivers build config 00:01:37.485 bus/cdx: not in enabled drivers build config 00:01:37.485 bus/dpaa: not in enabled drivers build config 00:01:37.485 bus/fslmc: not in enabled drivers build config 00:01:37.485 bus/ifpga: not in enabled drivers build config 00:01:37.485 bus/platform: not in enabled drivers build config 00:01:37.485 bus/vmbus: not in enabled drivers build config 00:01:37.485 common/cnxk: not in enabled drivers build config 00:01:37.485 common/mlx5: not in enabled drivers build config 00:01:37.485 common/nfp: not in enabled drivers build config 00:01:37.485 common/qat: not in enabled drivers build config 00:01:37.485 common/sfc_efx: not in enabled drivers build config 00:01:37.485 mempool/bucket: not in enabled drivers build config 00:01:37.485 mempool/cnxk: not in enabled drivers build config 00:01:37.485 mempool/dpaa: not in enabled drivers build config 00:01:37.485 mempool/dpaa2: not in enabled drivers build config 00:01:37.485 mempool/octeontx: not in enabled drivers build config 00:01:37.485 mempool/stack: not in enabled drivers build config 00:01:37.485 dma/cnxk: not in enabled drivers build config 00:01:37.485 dma/dpaa: not in enabled drivers build config 00:01:37.485 dma/dpaa2: not in enabled drivers build config 00:01:37.485 dma/hisilicon: not in enabled drivers build config 00:01:37.485 dma/idxd: not in enabled drivers build 
config 00:01:37.485 dma/ioat: not in enabled drivers build config 00:01:37.485 dma/skeleton: not in enabled drivers build config 00:01:37.485 net/af_packet: not in enabled drivers build config 00:01:37.485 net/af_xdp: not in enabled drivers build config 00:01:37.485 net/ark: not in enabled drivers build config 00:01:37.485 net/atlantic: not in enabled drivers build config 00:01:37.485 net/avp: not in enabled drivers build config 00:01:37.485 net/axgbe: not in enabled drivers build config 00:01:37.485 net/bnx2x: not in enabled drivers build config 00:01:37.485 net/bnxt: not in enabled drivers build config 00:01:37.485 net/bonding: not in enabled drivers build config 00:01:37.485 net/cnxk: not in enabled drivers build config 00:01:37.485 net/cpfl: not in enabled drivers build config 00:01:37.485 net/cxgbe: not in enabled drivers build config 00:01:37.485 net/dpaa: not in enabled drivers build config 00:01:37.485 net/dpaa2: not in enabled drivers build config 00:01:37.485 net/e1000: not in enabled drivers build config 00:01:37.485 net/ena: not in enabled drivers build config 00:01:37.485 net/enetc: not in enabled drivers build config 00:01:37.485 net/enetfec: not in enabled drivers build config 00:01:37.485 net/enic: not in enabled drivers build config 00:01:37.485 net/failsafe: not in enabled drivers build config 00:01:37.485 net/fm10k: not in enabled drivers build config 00:01:37.485 net/gve: not in enabled drivers build config 00:01:37.485 net/hinic: not in enabled drivers build config 00:01:37.485 net/hns3: not in enabled drivers build config 00:01:37.485 net/iavf: not in enabled drivers build config 00:01:37.485 net/ice: not in enabled drivers build config 00:01:37.485 net/idpf: not in enabled drivers build config 00:01:37.485 net/igc: not in enabled drivers build config 00:01:37.485 net/ionic: not in enabled drivers build config 00:01:37.485 net/ipn3ke: not in enabled drivers build config 00:01:37.485 net/ixgbe: not in enabled drivers build config 00:01:37.485 net/mana: not in enabled drivers build config 00:01:37.485 net/memif: not in enabled drivers build config 00:01:37.485 net/mlx4: not in enabled drivers build config 00:01:37.485 net/mlx5: not in enabled drivers build config 00:01:37.485 net/mvneta: not in enabled drivers build config 00:01:37.485 net/mvpp2: not in enabled drivers build config 00:01:37.485 net/netvsc: not in enabled drivers build config 00:01:37.485 net/nfb: not in enabled drivers build config 00:01:37.485 net/nfp: not in enabled drivers build config 00:01:37.485 net/ngbe: not in enabled drivers build config 00:01:37.485 net/null: not in enabled drivers build config 00:01:37.485 net/octeontx: not in enabled drivers build config 00:01:37.485 net/octeon_ep: not in enabled drivers build config 00:01:37.485 net/pcap: not in enabled drivers build config 00:01:37.485 net/pfe: not in enabled drivers build config 00:01:37.485 net/qede: not in enabled drivers build config 00:01:37.485 net/ring: not in enabled drivers build config 00:01:37.485 net/sfc: not in enabled drivers build config 00:01:37.485 net/softnic: not in enabled drivers build config 00:01:37.485 net/tap: not in enabled drivers build config 00:01:37.485 net/thunderx: not in enabled drivers build config 00:01:37.485 net/txgbe: not in enabled drivers build config 00:01:37.485 net/vdev_netvsc: not in enabled drivers build config 00:01:37.485 net/vhost: not in enabled drivers build config 00:01:37.485 net/virtio: not in enabled drivers build config 00:01:37.485 net/vmxnet3: not in enabled drivers build config 
00:01:37.485 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.485 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.485 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.485 raw/ifpga: not in enabled drivers build config 00:01:37.485 raw/ntb: not in enabled drivers build config 00:01:37.485 raw/skeleton: not in enabled drivers build config 00:01:37.485 crypto/armv8: not in enabled drivers build config 00:01:37.485 crypto/bcmfs: not in enabled drivers build config 00:01:37.485 crypto/caam_jr: not in enabled drivers build config 00:01:37.485 crypto/ccp: not in enabled drivers build config 00:01:37.485 crypto/cnxk: not in enabled drivers build config 00:01:37.485 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.485 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.485 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.485 crypto/mlx5: not in enabled drivers build config 00:01:37.485 crypto/mvsam: not in enabled drivers build config 00:01:37.485 crypto/nitrox: not in enabled drivers build config 00:01:37.485 crypto/null: not in enabled drivers build config 00:01:37.485 crypto/octeontx: not in enabled drivers build config 00:01:37.485 crypto/openssl: not in enabled drivers build config 00:01:37.485 crypto/scheduler: not in enabled drivers build config 00:01:37.485 crypto/uadk: not in enabled drivers build config 00:01:37.485 crypto/virtio: not in enabled drivers build config 00:01:37.485 compress/isal: not in enabled drivers build config 00:01:37.485 compress/mlx5: not in enabled drivers build config 00:01:37.485 compress/octeontx: not in enabled drivers build config 00:01:37.485 compress/zlib: not in enabled drivers build config 00:01:37.485 regex/mlx5: not in enabled drivers build config 00:01:37.485 regex/cn9k: not in enabled drivers build config 00:01:37.486 ml/cnxk: not in enabled drivers build config 00:01:37.486 vdpa/ifc: not in enabled drivers build config 00:01:37.486 vdpa/mlx5: not in enabled drivers build config 00:01:37.486 vdpa/nfp: not in enabled drivers build config 00:01:37.486 vdpa/sfc: not in enabled drivers build config 00:01:37.486 event/cnxk: not in enabled drivers build config 00:01:37.486 event/dlb2: not in enabled drivers build config 00:01:37.486 event/dpaa: not in enabled drivers build config 00:01:37.486 event/dpaa2: not in enabled drivers build config 00:01:37.486 event/dsw: not in enabled drivers build config 00:01:37.486 event/opdl: not in enabled drivers build config 00:01:37.486 event/skeleton: not in enabled drivers build config 00:01:37.486 event/sw: not in enabled drivers build config 00:01:37.486 event/octeontx: not in enabled drivers build config 00:01:37.486 baseband/acc: not in enabled drivers build config 00:01:37.486 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:37.486 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:37.486 baseband/la12xx: not in enabled drivers build config 00:01:37.486 baseband/null: not in enabled drivers build config 00:01:37.486 baseband/turbo_sw: not in enabled drivers build config 00:01:37.486 gpu/cuda: not in enabled drivers build config 00:01:37.486 00:01:37.486 00:01:37.486 Build targets in project: 215 00:01:37.486 00:01:37.486 DPDK 23.11.0 00:01:37.486 00:01:37.486 User defined options 00:01:37.486 libdir : lib 00:01:37.486 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.486 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.486 c_link_args : 00:01:37.486 enable_docs : false 
00:01:37.486 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.486 enable_kmods : false 00:01:37.486 machine : native 00:01:37.486 tests : false 00:01:37.486 00:01:37.486 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.486 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:37.486 13:12:34 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:37.486 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.752 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.752 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.752 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.752 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.752 [5/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.752 [6/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.752 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.752 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.752 [9/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.752 [10/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.752 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.015 [12/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.015 [13/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:38.015 [14/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.015 [15/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.015 [16/705] Linking static target lib/librte_log.a 00:01:38.015 [17/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.015 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.015 [19/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.015 [20/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.015 [21/705] Linking static target lib/librte_kvargs.a 00:01:38.015 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.015 [23/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.015 [24/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.015 [25/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.015 [26/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.015 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.015 [28/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.015 [29/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.015 [30/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.015 [31/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.015 [32/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.015 [33/705] Linking static target lib/librte_pci.a 00:01:38.015 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.015 [35/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.275 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.275 [37/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.275 [38/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.275 [39/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.275 [40/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:38.275 [41/705] Linking static target lib/librte_cfgfile.a 00:01:38.275 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.275 [43/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.275 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.275 [45/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.536 [46/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.536 [47/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.536 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.536 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.536 [50/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.536 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.536 [52/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.536 [53/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.536 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.536 [55/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.536 [56/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.536 [57/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.536 [58/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.536 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.536 [60/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.536 [61/705] Linking static target lib/librte_meter.a 00:01:38.536 [62/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.536 [63/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.536 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.536 [65/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.536 [66/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.536 [67/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.536 [68/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.536 [69/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.536 [70/705] Linking static target lib/librte_bitratestats.a 00:01:38.536 [71/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.536 [72/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:38.536 [73/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.536 [74/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.536 [75/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.536 [76/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:38.536 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.536 [78/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.536 [79/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.536 [80/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.536 [81/705] Linking static target lib/librte_ring.a 00:01:38.537 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.537 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.537 [84/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.537 [85/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.537 [86/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.537 [87/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.537 [88/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:38.537 [89/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:38.537 [90/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.537 [91/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.537 [92/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.537 [93/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.537 [94/705] Linking static target lib/librte_cmdline.a 00:01:38.537 [95/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.537 [96/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.537 [97/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:38.537 [98/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.537 [99/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:38.537 [100/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.537 [101/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.537 [102/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.537 [103/705] Linking static target lib/librte_metrics.a 00:01:38.537 [104/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.537 [105/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.537 [106/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.537 [107/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.832 [108/705] Linking target lib/librte_log.so.24.0 00:01:38.832 [109/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.832 [110/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.832 [111/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:38.832 [112/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:38.832 [113/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.832 [114/705] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.832 [115/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.832 [116/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:38.832 [117/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.832 [118/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.832 [119/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.832 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.832 [121/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.832 [122/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.832 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.832 [124/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:38.832 [125/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.832 [126/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:38.832 [127/705] Linking static target lib/librte_net.a 00:01:38.832 [128/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.832 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.832 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.832 [131/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:38.832 [132/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:38.832 [133/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.832 [134/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.832 [135/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.832 [136/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:38.832 [137/705] Linking static target lib/librte_compressdev.a 00:01:38.832 [138/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.832 [139/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.832 [140/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.832 [141/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:38.832 [142/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.832 [143/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.832 [144/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.832 [145/705] Linking target lib/librte_kvargs.so.24.0 00:01:38.832 [146/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:38.832 [147/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.108 [148/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:39.108 [149/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.108 [150/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.108 [151/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.108 [152/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.108 [153/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.108 [154/705] Compiling C object 
lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:39.108 [155/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:39.108 [156/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.108 [157/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.108 [158/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.108 [159/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.108 [160/705] Linking static target lib/librte_dispatcher.a 00:01:39.108 [161/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:39.108 [162/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.108 [163/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.108 [164/705] Linking static target lib/librte_timer.a 00:01:39.108 [165/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:39.108 [166/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:39.108 [167/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:39.108 [168/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:39.108 [169/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:39.108 [170/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:39.108 [171/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:39.108 [172/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:39.108 [173/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.108 [174/705] Linking static target lib/librte_gro.a 00:01:39.108 [175/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:39.108 [176/705] Linking static target lib/librte_jobstats.a 00:01:39.108 [177/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:39.108 [178/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.108 [179/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:39.108 [180/705] Linking static target lib/librte_gpudev.a 00:01:39.108 [181/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.108 [182/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.108 [183/705] Linking static target lib/librte_bbdev.a 00:01:39.108 [184/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:39.108 [185/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.108 [186/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.108 [187/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.108 [188/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.108 [189/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.108 [190/705] Linking static target lib/librte_dmadev.a 00:01:39.108 [191/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:39.108 [192/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:39.108 [193/705] Linking static target lib/librte_mempool.a 00:01:39.108 [194/705] Linking static target lib/librte_distributor.a 00:01:39.108 [195/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:39.108 [196/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.108 [197/705] 
Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:39.108 [198/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.367 [199/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:39.367 [200/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:39.367 [201/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:39.367 [202/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:39.367 [203/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:39.367 [204/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:39.367 [205/705] Linking static target lib/librte_stack.a 00:01:39.367 [206/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:39.367 [207/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.367 [208/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:39.367 [209/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.367 [210/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:39.367 [211/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.367 [212/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.367 [213/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.367 [214/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.367 [215/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:39.367 [216/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.367 [217/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:39.367 [218/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:39.367 [219/705] Linking static target lib/librte_latencystats.a 00:01:39.367 [220/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:39.367 [221/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:39.367 [222/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:39.367 [223/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:39.367 [224/705] Linking static target lib/librte_gso.a 00:01:39.367 [225/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:39.367 [226/705] Linking static target lib/librte_regexdev.a 00:01:39.367 [227/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:39.367 [228/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:39.367 [229/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:39.367 [230/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.367 [231/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:39.367 [232/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:39.367 [233/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:39.367 [234/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.367 [235/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.367 [236/705] Linking static target lib/librte_telemetry.a 00:01:39.367 [237/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.367 [238/705] 
Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:39.367 [239/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.367 [240/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:39.367 [241/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:39.627 [242/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.627 [243/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.627 [244/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:39.627 [245/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:39.627 [246/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:39.627 [247/705] Linking static target lib/librte_mldev.a 00:01:39.627 [248/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:39.627 [249/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.627 [250/705] Linking static target lib/librte_ip_frag.a 00:01:39.627 [251/705] Linking static target lib/librte_eal.a 00:01:39.627 [252/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.627 [253/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.627 [254/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.627 [255/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:39.627 [256/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.627 [257/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [258/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.627 [259/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:39.627 [260/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.627 [261/705] Linking static target lib/librte_rawdev.a 00:01:39.627 [262/705] Linking static target lib/librte_rcu.a 00:01:39.627 [263/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.627 [264/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [265/705] Linking static target lib/librte_power.a 00:01:39.627 [266/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.627 [267/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:39.627 [268/705] Linking static target lib/librte_reorder.a 00:01:39.627 [269/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.627 [270/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [271/705] Linking static target lib/librte_security.a 00:01:39.627 [272/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [273/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:39.627 [274/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:39.627 [275/705] Linking static target lib/librte_pcapng.a 00:01:39.627 [276/705] Linking static target lib/librte_bpf.a 00:01:39.627 [277/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:39.627 [278/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.627 [279/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:39.627 [280/705] Generating lib/dispatcher.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:39.627 [281/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:39.627 [282/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [283/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [284/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [285/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:39.627 [286/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:39.890 [287/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:39.890 [288/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.890 [289/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:39.890 [290/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.890 [291/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:39.890 [292/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.890 [293/705] Linking static target lib/librte_mbuf.a 00:01:39.890 [294/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:39.890 [295/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:39.890 [296/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.890 [297/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:39.890 [298/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:39.890 [299/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:39.890 [300/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:39.890 [301/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:39.890 [302/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:39.890 [303/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:39.890 [304/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:39.890 [305/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.890 [306/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:39.890 [307/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:39.890 [308/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.890 [309/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:39.890 [310/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:39.890 [311/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:39.890 [312/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:39.890 [313/705] Linking static target lib/librte_rib.a 00:01:39.890 [314/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:39.890 [315/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:39.890 [316/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:39.890 [317/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:39.890 [318/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:39.890 [319/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:39.890 [320/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.890 [321/705] Linking static target lib/librte_efd.a 00:01:39.890 [322/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:39.890 [323/705] Linking static target lib/librte_lpm.a 00:01:39.890 [324/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.890 [325/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:39.890 [326/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.890 [327/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.890 [328/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.890 [329/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:39.890 [330/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:39.890 [331/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:40.149 [332/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [333/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:40.149 [334/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:40.149 [335/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:40.149 [336/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:40.149 [337/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:40.149 [338/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:40.149 [339/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.149 [340/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [341/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:40.149 [342/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:40.149 [343/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [344/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:40.149 [345/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [346/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:40.149 [347/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [348/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:40.149 [349/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:40.149 [350/705] Linking static target lib/librte_fib.a 00:01:40.149 [351/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.149 [352/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.149 [353/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.149 [354/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:40.149 [355/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:40.149 [356/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:40.149 [357/705] Linking target lib/librte_telemetry.so.24.0 00:01:40.149 [358/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:40.149 [359/705] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.149 [360/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.149 [361/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:40.149 [362/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:40.149 [363/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:40.149 [364/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.415 [365/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.415 [366/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.415 [367/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:40.415 [368/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:40.415 [369/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:40.415 [370/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:40.415 [371/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.415 [372/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.415 [373/705] Linking static target lib/librte_graph.a 00:01:40.415 [374/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:40.415 [375/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:40.415 [376/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:40.415 [377/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:40.415 [378/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:40.415 [379/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:40.415 [380/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:40.415 [381/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:40.415 [382/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:40.415 [383/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:40.415 [384/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:40.415 [385/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:40.415 [386/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:40.415 [387/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:40.415 [388/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:40.415 [389/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.415 [390/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.415 [391/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:40.415 [392/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:40.415 [393/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.415 [394/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:40.415 [395/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:40.415 [396/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.415 [397/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:40.415 [398/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.415 
[399/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:40.415 [400/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.677 [401/705] Linking static target drivers/librte_bus_vdev.a 00:01:40.677 [402/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.677 [403/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:40.677 [404/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [405/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:40.677 [406/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:40.677 [407/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:40.677 [408/705] Linking static target lib/librte_pdump.a 00:01:40.677 [409/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [410/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.677 [411/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:40.677 [412/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:40.677 [413/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:40.677 [414/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:40.677 [415/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [416/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:40.677 [417/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:40.677 [418/705] Linking static target lib/librte_table.a 00:01:40.677 [419/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.677 [420/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:40.677 [421/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:40.677 [422/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:40.677 [423/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [424/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:40.677 [425/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [426/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:40.677 [427/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:40.677 [428/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.677 [429/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.677 [430/705] Linking static target lib/librte_cryptodev.a 00:01:40.677 [431/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:40.677 [432/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:40.677 [433/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:40.677 [434/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:40.677 [435/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:40.677 [436/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.677 [437/705] 
Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:40.677 [438/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:40.677 [439/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:40.677 [440/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.677 [441/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:40.677 [442/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.677 [443/705] Linking static target drivers/librte_bus_pci.a 00:01:40.936 [444/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:40.936 [445/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:40.936 [446/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:40.936 [447/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:40.936 [448/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:40.936 [449/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:40.936 [450/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:40.936 [451/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:40.936 [452/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:40.936 [453/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:40.936 [454/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.936 [455/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:40.936 [456/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:40.936 [457/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.936 [458/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.936 [459/705] Linking static target lib/librte_sched.a 00:01:40.936 [460/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:40.936 [461/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:40.936 [462/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.936 [463/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:40.936 [464/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:40.936 [465/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:40.936 [466/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:40.936 [467/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.936 [468/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:40.936 [469/705] Linking static target lib/librte_ipsec.a 00:01:40.936 [470/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:40.936 [471/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:40.936 [472/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:40.936 [473/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:40.936 [474/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:40.936 [475/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:40.936 [476/705] Linking static target lib/librte_node.a 00:01:40.936 [477/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:40.936 [478/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:40.936 [479/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:40.936 [480/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:40.936 [481/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:40.936 [482/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:40.936 [483/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:40.936 [484/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:40.936 [485/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:40.936 [486/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:40.936 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:41.194 [488/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.194 [489/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.194 [490/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.194 [491/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:41.194 [492/705] Linking static target drivers/librte_mempool_ring.a 00:01:41.194 [493/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:41.194 [494/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:41.194 [495/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:41.194 [496/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:41.194 [497/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:41.194 [498/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:41.194 [499/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.194 [500/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:41.194 [501/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:41.194 [502/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:41.194 [503/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:41.194 [504/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:41.194 [505/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:41.194 [506/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:41.194 [507/705] Linking static target lib/librte_pdcp.a 00:01:41.194 [508/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:41.194 [509/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:41.194 [510/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:41.194 [511/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:41.194 [512/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:41.195 [513/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:41.195 [514/705] Linking static target lib/librte_member.a 00:01:41.195 [515/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:41.195 [516/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:41.195 [517/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:41.195 [518/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:41.195 [519/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:41.195 [520/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:41.195 [521/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:41.195 [522/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [523/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:41.455 [524/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [525/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [526/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:41.455 [527/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:41.455 [528/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:41.455 [529/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [530/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [531/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:41.455 [532/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:41.455 [533/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:41.455 [534/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:41.455 [535/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:41.455 [536/705] Linking static target lib/librte_port.a 00:01:41.455 [537/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:41.455 [538/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:41.455 [539/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:41.455 [540/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:41.455 [541/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:41.455 [542/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:41.455 [543/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.455 [544/705] Linking static target lib/acl/libavx2_tmp.a 00:01:41.455 [545/705] Linking static target lib/librte_eventdev.a 00:01:41.455 [546/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:41.455 [547/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:41.455 [548/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.455 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 
00:01:41.455 [550/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:41.455 [551/705] Linking static target lib/librte_acl.a 00:01:41.455 [552/705] Linking static target lib/librte_hash.a 00:01:41.455 [553/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:41.717 [554/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:41.717 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:41.717 [556/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.717 [557/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.717 [558/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:41.717 [559/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:41.717 [560/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:41.717 [561/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:41.717 [562/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:41.717 [563/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:41.717 [564/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:41.717 [565/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:41.978 [566/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:41.978 [567/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.240 [568/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:42.240 [569/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:42.240 [570/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.240 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:42.502 [572/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.502 [573/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:42.502 [574/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.502 [575/705] Linking static target lib/librte_ethdev.a 00:01:42.502 [576/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:42.502 [577/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.074 [578/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:43.336 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:43.336 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:43.336 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:43.598 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.598 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:43.598 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:43.598 [585/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:43.598 [586/705] Linking static target drivers/librte_net_i40e.a 00:01:44.543 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:44.804 [588/705] Generating 
drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.804 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:45.066 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.286 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:49.286 [592/705] Linking static target lib/librte_pipeline.a 00:01:50.230 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.230 [594/705] Linking static target lib/librte_vhost.a 00:01:50.491 [595/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.491 [596/705] Linking target lib/librte_eal.so.24.0 00:01:50.491 [597/705] Linking target app/dpdk-test-cmdline 00:01:50.491 [598/705] Linking target app/dpdk-test-compress-perf 00:01:50.491 [599/705] Linking target app/dpdk-dumpcap 00:01:50.491 [600/705] Linking target app/dpdk-test-acl 00:01:50.491 [601/705] Linking target app/dpdk-test-sad 00:01:50.491 [602/705] Linking target app/dpdk-test-gpudev 00:01:50.491 [603/705] Linking target app/dpdk-pdump 00:01:50.491 [604/705] Linking target app/dpdk-test-crypto-perf 00:01:50.753 [605/705] Linking target app/dpdk-test-eventdev 00:01:50.753 [606/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:50.753 [607/705] Linking target lib/librte_stack.so.24.0 00:01:50.753 [608/705] Linking target lib/librte_dmadev.so.24.0 00:01:50.753 [609/705] Linking target lib/librte_ring.so.24.0 00:01:50.753 [610/705] Linking target lib/librte_pci.so.24.0 00:01:50.753 [611/705] Linking target lib/librte_rawdev.so.24.0 00:01:50.753 [612/705] Linking target lib/librte_cfgfile.so.24.0 00:01:50.753 [613/705] Linking target lib/librte_meter.so.24.0 00:01:50.753 [614/705] Linking target lib/librte_timer.so.24.0 00:01:50.753 [615/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:50.753 [616/705] Linking target app/dpdk-graph 00:01:50.753 [617/705] Linking target lib/librte_jobstats.so.24.0 00:01:50.753 [618/705] Linking target lib/librte_acl.so.24.0 00:01:50.753 [619/705] Linking target app/dpdk-proc-info 00:01:50.753 [620/705] Linking target app/dpdk-test-regex 00:01:50.753 [621/705] Linking target app/dpdk-test-fib 00:01:50.753 [622/705] Linking target app/dpdk-test-dma-perf 00:01:50.753 [623/705] Linking target app/dpdk-test-flow-perf 00:01:50.753 [624/705] Linking target app/dpdk-test-security-perf 00:01:50.753 [625/705] Linking target app/dpdk-test-bbdev 00:01:50.753 [626/705] Linking target app/dpdk-test-mldev 00:01:50.753 [627/705] Linking target app/dpdk-test-pipeline 00:01:50.753 [628/705] Linking target app/dpdk-testpmd 00:01:50.753 [629/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:50.753 [630/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:50.753 [631/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:50.753 [632/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:50.753 [633/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:50.753 [634/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:50.753 [635/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:50.753 [636/705] Generating lib/ethdev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:50.753 [637/705] Linking target lib/librte_rcu.so.24.0 00:01:50.753 [638/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:50.753 [639/705] Linking target lib/librte_mempool.so.24.0 00:01:51.015 [640/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:51.015 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:51.015 [642/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:51.015 [643/705] Linking target lib/librte_rib.so.24.0 00:01:51.015 [644/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:51.015 [645/705] Linking target lib/librte_mbuf.so.24.0 00:01:51.277 [646/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:51.277 [647/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:51.277 [648/705] Linking target lib/librte_fib.so.24.0 00:01:51.277 [649/705] Linking target lib/librte_compressdev.so.24.0 00:01:51.277 [650/705] Linking target lib/librte_net.so.24.0 00:01:51.277 [651/705] Linking target lib/librte_bbdev.so.24.0 00:01:51.277 [652/705] Linking target lib/librte_distributor.so.24.0 00:01:51.277 [653/705] Linking target lib/librte_gpudev.so.24.0 00:01:51.277 [654/705] Linking target lib/librte_cryptodev.so.24.0 00:01:51.277 [655/705] Linking target lib/librte_regexdev.so.24.0 00:01:51.277 [656/705] Linking target lib/librte_reorder.so.24.0 00:01:51.277 [657/705] Linking target lib/librte_mldev.so.24.0 00:01:51.277 [658/705] Linking target lib/librte_sched.so.24.0 00:01:51.277 [659/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:51.277 [660/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:51.277 [661/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:51.277 [662/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:51.538 [663/705] Linking target lib/librte_cmdline.so.24.0 00:01:51.538 [664/705] Linking target lib/librte_hash.so.24.0 00:01:51.538 [665/705] Linking target lib/librte_security.so.24.0 00:01:51.538 [666/705] Linking target lib/librte_ethdev.so.24.0 00:01:51.538 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:51.538 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:51.538 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:51.538 [670/705] Linking target lib/librte_efd.so.24.0 00:01:51.538 [671/705] Linking target lib/librte_lpm.so.24.0 00:01:51.538 [672/705] Linking target lib/librte_member.so.24.0 00:01:51.538 [673/705] Linking target lib/librte_pdcp.so.24.0 00:01:51.538 [674/705] Linking target lib/librte_ipsec.so.24.0 00:01:51.538 [675/705] Linking target lib/librte_metrics.so.24.0 00:01:51.538 [676/705] Linking target lib/librte_bpf.so.24.0 00:01:51.538 [677/705] Linking target lib/librte_gso.so.24.0 00:01:51.538 [678/705] Linking target lib/librte_gro.so.24.0 00:01:51.538 [679/705] Linking target lib/librte_pcapng.so.24.0 00:01:51.538 [680/705] Linking target lib/librte_ip_frag.so.24.0 00:01:51.538 [681/705] Linking target lib/librte_power.so.24.0 00:01:51.538 [682/705] Linking target lib/librte_eventdev.so.24.0 00:01:51.800 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:51.800 [684/705] Generating 
symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:51.800 [685/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:51.800 [686/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:51.800 [687/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:51.800 [688/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:51.800 [689/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:51.800 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:51.800 [691/705] Linking target lib/librte_pdump.so.24.0 00:01:51.800 [692/705] Linking target lib/librte_latencystats.so.24.0 00:01:51.800 [693/705] Linking target lib/librte_bitratestats.so.24.0 00:01:51.800 [694/705] Linking target lib/librte_graph.so.24.0 00:01:51.800 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:01:51.800 [696/705] Linking target lib/librte_port.so.24.0 00:01:52.061 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:52.061 [698/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:52.061 [699/705] Linking target lib/librte_node.so.24.0 00:01:52.061 [700/705] Linking target lib/librte_table.so.24.0 00:01:52.322 [701/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:52.322 [702/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.322 [703/705] Linking target lib/librte_vhost.so.24.0 00:01:54.240 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.240 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:54.240 13:12:51 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:54.240 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:54.240 [0/1] Installing files. 
00:01:54.506 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:54.506 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.507 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:54.508 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:54.508 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.509 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.510 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:54.511 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:54.511 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:54.511 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:54.511 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.511 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.512 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.777 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:54.778 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:54.778 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:54.778 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:54.778 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:54.778 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.778 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.779 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:54.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:54.782 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:54.782 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:54.782 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:54.782 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:54.782 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:54.782 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:54.782 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:54.782 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:54.782 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:54.782 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:54.782 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:54.782 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:54.782 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:54.782 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:54.782 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:54.782 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:54.782 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:54.782 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:54.782 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:54.782 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:54.782 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:54.782 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:54.782 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:54.782 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:54.782 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:54.782 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:54.782 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:54.782 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:54.782 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:54.782 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:54.782 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:54.782 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:54.782 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:54.782 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:54.782 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:54.782 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:54.782 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:54.782 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:54.782 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:54.782 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:54.782 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:54.782 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:54.782 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:54.782 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:54.782 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:54.782 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:54.782 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:54.782 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:54.782 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:54.782 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:54.783 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:54.783 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:54.783 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:54.783 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:54.783 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:54.783 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:54.783 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:54.783 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:54.783 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:54.783 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:54.783 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:54.783 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:54.783 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:54.783 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:54.783 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:54.783 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:54.783 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:54.783 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:54.783 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:54.783 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:54.783 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:54.783 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:54.783 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:54.783 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:54.783 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:54.783 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:54.783 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:54.783 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:54.783 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:54.783 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:54.783 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:54.783 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:54.783 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:54.783 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:54.783 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:54.783 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:54.783 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:54.783 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:54.783 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:54.783 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:54.783 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:54.783 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:54.783 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:54.783 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:54.783 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:54.783 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:54.783 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:54.783 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:54.783 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:54.783 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:54.783 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:54.783 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:54.783 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:54.783 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:54.783 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:54.783 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:54.783 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:54.783 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:54.783 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:54.783 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:54.783 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:54.783 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:54.783 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:54.783 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:54.783 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:54.783 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:54.783 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:54.783 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:54.783 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:54.783 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:54.783 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:54.783 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:54.783 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:54.783 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:54.783 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:54.783 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:54.783 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:54.783 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:54.783 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:54.783 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:54.783 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:54.783 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:54.783 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:54.783 13:12:52 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:54.783 13:12:52 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:54.783 13:12:52 -- common/autobuild_common.sh@203 -- $ cat 00:01:54.783 13:12:52 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.783 00:01:54.783 real 0m23.613s 00:01:54.783 user 7m5.017s 00:01:54.783 sys 2m46.072s 00:01:54.783 13:12:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.783 13:12:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.783 ************************************ 00:01:54.783 END TEST build_native_dpdk 00:01:54.783 ************************************ 00:01:55.045 13:12:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:55.045 13:12:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:55.045 13:12:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:55.045 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:55.306 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:55.306 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:55.306 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:55.567 Using 'verbs' RDMA provider 00:02:08.794 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:23.711 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:23.711 Creating mk/config.mk...done. 00:02:23.711 Creating mk/cc.flags.mk...done. 00:02:23.711 Type 'make' to build. 00:02:23.711 13:13:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:23.711 13:13:19 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:23.711 13:13:19 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:23.711 13:13:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.711 ************************************ 00:02:23.711 START TEST make 00:02:23.711 ************************************ 00:02:23.711 13:13:19 -- common/autotest_common.sh@1104 -- $ make -j144 00:02:23.711 make[1]: Nothing to be done for 'all'. 
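The configure step above points SPDK at the DPDK tree that was just installed (--with-dpdk=.../dpdk/build) and resolves the extra DPDK libraries through the pkg-config files placed under dpdk/build/lib/pkgconfig. A minimal sketch of how that lookup and the versioned symlink chain from the install step can be checked by hand, assuming the same workspace layout as this job (adjust the paths for your own checkout):

    # verify the freshly installed DPDK is visible to pkg-config (paths follow this job's workspace)
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk     # prints the DPDK version that was just built
    pkg-config --libs libdpdk           # the librte_* libraries SPDK's configure step links against
    # inspect the versioned symlink chain created above, e.g. for librte_eal:
    ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*
    # expected: librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0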
00:02:23.711 The Meson build system 00:02:23.711 Version: 1.3.1 00:02:23.711 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:23.711 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:23.711 Build type: native build 00:02:23.711 Project name: libvfio-user 00:02:23.711 Project version: 0.0.1 00:02:23.711 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:23.711 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:23.711 Host machine cpu family: x86_64 00:02:23.711 Host machine cpu: x86_64 00:02:23.711 Run-time dependency threads found: YES 00:02:23.711 Library dl found: YES 00:02:23.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:23.711 Run-time dependency json-c found: YES 0.17 00:02:23.711 Run-time dependency cmocka found: YES 1.1.7 00:02:23.711 Program pytest-3 found: NO 00:02:23.711 Program flake8 found: NO 00:02:23.711 Program misspell-fixer found: NO 00:02:23.711 Program restructuredtext-lint found: NO 00:02:23.711 Program valgrind found: YES (/usr/bin/valgrind) 00:02:23.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.711 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.711 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:23.712 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:23.712 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:23.712 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:23.712 Build targets in project: 8 00:02:23.712 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:23.712 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:23.712 00:02:23.712 libvfio-user 0.0.1 00:02:23.712 00:02:23.712 User defined options 00:02:23.712 buildtype : debug 00:02:23.712 default_library: shared 00:02:23.712 libdir : /usr/local/lib 00:02:23.712 00:02:23.712 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.971 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:24.230 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:24.230 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:24.230 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:24.230 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:24.230 [5/37] Compiling C object samples/null.p/null.c.o 00:02:24.230 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:24.230 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:24.230 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:24.230 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:24.230 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:24.230 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:24.230 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:24.230 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:24.230 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:24.230 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:24.230 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:24.230 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:24.230 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:24.230 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:24.230 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:24.230 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:24.230 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:24.230 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:24.230 [24/37] Compiling C object samples/server.p/server.c.o 00:02:24.230 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:24.230 [26/37] Compiling C object samples/client.p/client.c.o 00:02:24.230 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:24.230 [28/37] Linking target samples/client 00:02:24.230 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:24.490 [30/37] Linking target test/unit_tests 00:02:24.490 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:24.490 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:24.490 [33/37] Linking target samples/null 00:02:24.490 [34/37] Linking target samples/lspci 00:02:24.490 [35/37] Linking target samples/gpio-pci-idio-16 00:02:24.490 [36/37] Linking target samples/server 00:02:24.490 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:24.490 INFO: autodetecting backend as ninja 00:02:24.490 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
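The libvfio-user submodule is configured and built out of tree with Meson and Ninja using the options shown in the summary above (buildtype debug, default_library shared, libdir /usr/local/lib), and is then staged into SPDK's build directory via a DESTDIR install in the next step of the log. A rough equivalent of that sequence, shown only as an illustration of the pattern and reusing this job's source, build, and staging paths:

    # illustrative only: out-of-tree Meson configure, build, and staged install of libvfio-user
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BLD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C "$BLD"
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BLD"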
00:02:24.751 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.014 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.014 ninja: no work to do. 00:02:33.167 CC lib/ut_mock/mock.o 00:02:33.167 CC lib/log/log.o 00:02:33.167 CC lib/log/log_flags.o 00:02:33.167 CC lib/log/log_deprecated.o 00:02:33.167 CC lib/ut/ut.o 00:02:33.167 LIB libspdk_ut_mock.a 00:02:33.167 LIB libspdk_ut.a 00:02:33.167 SO libspdk_ut.so.1.0 00:02:33.167 SO libspdk_ut_mock.so.5.0 00:02:33.167 LIB libspdk_log.a 00:02:33.167 SYMLINK libspdk_ut.so 00:02:33.167 SO libspdk_log.so.6.1 00:02:33.167 SYMLINK libspdk_ut_mock.so 00:02:33.167 SYMLINK libspdk_log.so 00:02:33.167 CC lib/util/base64.o 00:02:33.167 CC lib/util/bit_array.o 00:02:33.167 CC lib/util/cpuset.o 00:02:33.167 CC lib/util/crc16.o 00:02:33.167 CC lib/util/crc32.o 00:02:33.167 CC lib/util/crc32c.o 00:02:33.167 CC lib/util/crc32_ieee.o 00:02:33.167 CC lib/ioat/ioat.o 00:02:33.167 CC lib/util/crc64.o 00:02:33.167 CC lib/dma/dma.o 00:02:33.167 CC lib/util/dif.o 00:02:33.167 CC lib/util/file.o 00:02:33.167 CC lib/util/fd.o 00:02:33.167 CC lib/util/hexlify.o 00:02:33.167 CC lib/util/iov.o 00:02:33.167 CC lib/util/pipe.o 00:02:33.167 CC lib/util/math.o 00:02:33.167 CXX lib/trace_parser/trace.o 00:02:33.167 CC lib/util/strerror_tls.o 00:02:33.167 CC lib/util/string.o 00:02:33.167 CC lib/util/fd_group.o 00:02:33.167 CC lib/util/uuid.o 00:02:33.167 CC lib/util/xor.o 00:02:33.167 CC lib/util/zipf.o 00:02:33.167 CC lib/vfio_user/host/vfio_user_pci.o 00:02:33.167 CC lib/vfio_user/host/vfio_user.o 00:02:33.167 LIB libspdk_dma.a 00:02:33.428 SO libspdk_dma.so.3.0 00:02:33.428 LIB libspdk_ioat.a 00:02:33.429 SO libspdk_ioat.so.6.0 00:02:33.429 SYMLINK libspdk_dma.so 00:02:33.429 LIB libspdk_vfio_user.a 00:02:33.429 SYMLINK libspdk_ioat.so 00:02:33.429 SO libspdk_vfio_user.so.4.0 00:02:33.429 SYMLINK libspdk_vfio_user.so 00:02:33.429 LIB libspdk_util.a 00:02:33.690 SO libspdk_util.so.8.0 00:02:33.690 SYMLINK libspdk_util.so 00:02:33.952 LIB libspdk_trace_parser.a 00:02:33.952 SO libspdk_trace_parser.so.4.0 00:02:33.952 CC lib/idxd/idxd.o 00:02:33.952 CC lib/json/json_parse.o 00:02:33.952 CC lib/idxd/idxd_user.o 00:02:33.952 CC lib/json/json_util.o 00:02:33.952 CC lib/json/json_write.o 00:02:33.952 CC lib/idxd/idxd_kernel.o 00:02:33.952 CC lib/conf/conf.o 00:02:33.952 CC lib/rdma/common.o 00:02:33.952 CC lib/rdma/rdma_verbs.o 00:02:33.952 CC lib/vmd/vmd.o 00:02:33.952 CC lib/vmd/led.o 00:02:33.952 CC lib/env_dpdk/env.o 00:02:33.952 CC lib/env_dpdk/pci.o 00:02:33.952 CC lib/env_dpdk/init.o 00:02:33.952 CC lib/env_dpdk/memory.o 00:02:33.952 CC lib/env_dpdk/threads.o 00:02:33.952 CC lib/env_dpdk/pci_ioat.o 00:02:33.952 CC lib/env_dpdk/pci_virtio.o 00:02:33.952 CC lib/env_dpdk/pci_vmd.o 00:02:33.952 CC lib/env_dpdk/pci_idxd.o 00:02:33.952 CC lib/env_dpdk/pci_event.o 00:02:33.952 CC lib/env_dpdk/sigbus_handler.o 00:02:33.952 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.952 CC lib/env_dpdk/pci_dpdk.o 00:02:33.952 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.952 SYMLINK libspdk_trace_parser.so 00:02:34.214 LIB libspdk_conf.a 00:02:34.214 LIB libspdk_json.a 00:02:34.214 SO libspdk_conf.so.5.0 00:02:34.214 LIB libspdk_rdma.a 00:02:34.214 SO libspdk_json.so.5.1 00:02:34.214 SO libspdk_rdma.so.5.0 00:02:34.476 SYMLINK libspdk_conf.so 00:02:34.476 SYMLINK libspdk_json.so 00:02:34.476 SYMLINK 
libspdk_rdma.so 00:02:34.476 LIB libspdk_idxd.a 00:02:34.476 SO libspdk_idxd.so.11.0 00:02:34.476 SYMLINK libspdk_idxd.so 00:02:34.476 LIB libspdk_vmd.a 00:02:34.476 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.476 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.476 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.476 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.737 SO libspdk_vmd.so.5.0 00:02:34.737 SYMLINK libspdk_vmd.so 00:02:34.737 LIB libspdk_jsonrpc.a 00:02:34.998 SO libspdk_jsonrpc.so.5.1 00:02:34.998 SYMLINK libspdk_jsonrpc.so 00:02:35.259 LIB libspdk_env_dpdk.a 00:02:35.259 CC lib/rpc/rpc.o 00:02:35.259 SO libspdk_env_dpdk.so.13.0 00:02:35.521 SYMLINK libspdk_env_dpdk.so 00:02:35.521 LIB libspdk_rpc.a 00:02:35.521 SO libspdk_rpc.so.5.0 00:02:35.521 SYMLINK libspdk_rpc.so 00:02:35.782 CC lib/trace/trace.o 00:02:35.782 CC lib/trace/trace_flags.o 00:02:35.782 CC lib/trace/trace_rpc.o 00:02:35.782 CC lib/notify/notify.o 00:02:35.782 CC lib/notify/notify_rpc.o 00:02:35.782 CC lib/sock/sock.o 00:02:35.782 CC lib/sock/sock_rpc.o 00:02:35.782 LIB libspdk_notify.a 00:02:36.079 SO libspdk_notify.so.5.0 00:02:36.079 LIB libspdk_trace.a 00:02:36.079 SYMLINK libspdk_notify.so 00:02:36.079 SO libspdk_trace.so.9.0 00:02:36.079 SYMLINK libspdk_trace.so 00:02:36.079 LIB libspdk_sock.a 00:02:36.079 SO libspdk_sock.so.8.0 00:02:36.339 SYMLINK libspdk_sock.so 00:02:36.339 CC lib/thread/thread.o 00:02:36.339 CC lib/thread/iobuf.o 00:02:36.601 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.601 CC lib/nvme/nvme_ctrlr.o 00:02:36.601 CC lib/nvme/nvme_fabric.o 00:02:36.601 CC lib/nvme/nvme_ns_cmd.o 00:02:36.601 CC lib/nvme/nvme_ns.o 00:02:36.601 CC lib/nvme/nvme_pcie_common.o 00:02:36.601 CC lib/nvme/nvme_pcie.o 00:02:36.601 CC lib/nvme/nvme_qpair.o 00:02:36.601 CC lib/nvme/nvme.o 00:02:36.601 CC lib/nvme/nvme_quirks.o 00:02:36.601 CC lib/nvme/nvme_transport.o 00:02:36.601 CC lib/nvme/nvme_discovery.o 00:02:36.601 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.601 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.601 CC lib/nvme/nvme_tcp.o 00:02:36.601 CC lib/nvme/nvme_opal.o 00:02:36.601 CC lib/nvme/nvme_io_msg.o 00:02:36.601 CC lib/nvme/nvme_poll_group.o 00:02:36.601 CC lib/nvme/nvme_zns.o 00:02:36.601 CC lib/nvme/nvme_cuse.o 00:02:36.601 CC lib/nvme/nvme_vfio_user.o 00:02:36.601 CC lib/nvme/nvme_rdma.o 00:02:37.547 LIB libspdk_thread.a 00:02:37.547 SO libspdk_thread.so.9.0 00:02:37.809 SYMLINK libspdk_thread.so 00:02:38.071 CC lib/blob/blobstore.o 00:02:38.071 CC lib/blob/request.o 00:02:38.071 CC lib/blob/zeroes.o 00:02:38.071 CC lib/blob/blob_bs_dev.o 00:02:38.071 CC lib/vfu_tgt/tgt_rpc.o 00:02:38.071 CC lib/accel/accel.o 00:02:38.071 CC lib/vfu_tgt/tgt_endpoint.o 00:02:38.071 CC lib/accel/accel_rpc.o 00:02:38.071 CC lib/init/json_config.o 00:02:38.071 CC lib/init/subsystem.o 00:02:38.071 CC lib/accel/accel_sw.o 00:02:38.071 CC lib/init/subsystem_rpc.o 00:02:38.071 CC lib/init/rpc.o 00:02:38.071 CC lib/virtio/virtio.o 00:02:38.071 CC lib/virtio/virtio_vhost_user.o 00:02:38.071 CC lib/virtio/virtio_vfio_user.o 00:02:38.071 CC lib/virtio/virtio_pci.o 00:02:38.333 LIB libspdk_init.a 00:02:38.333 LIB libspdk_nvme.a 00:02:38.333 SO libspdk_init.so.4.0 00:02:38.333 LIB libspdk_virtio.a 00:02:38.333 LIB libspdk_vfu_tgt.a 00:02:38.333 SYMLINK libspdk_init.so 00:02:38.333 SO libspdk_vfu_tgt.so.2.0 00:02:38.333 SO libspdk_virtio.so.6.0 00:02:38.333 SO libspdk_nvme.so.12.0 00:02:38.333 SYMLINK libspdk_vfu_tgt.so 00:02:38.333 SYMLINK libspdk_virtio.so 00:02:38.594 CC lib/event/app.o 00:02:38.594 CC lib/event/reactor.o 00:02:38.594 CC 
lib/event/log_rpc.o 00:02:38.594 CC lib/event/app_rpc.o 00:02:38.594 CC lib/event/scheduler_static.o 00:02:38.594 SYMLINK libspdk_nvme.so 00:02:38.856 LIB libspdk_accel.a 00:02:38.856 SO libspdk_accel.so.14.0 00:02:39.118 LIB libspdk_event.a 00:02:39.118 SYMLINK libspdk_accel.so 00:02:39.118 SO libspdk_event.so.12.0 00:02:39.118 SYMLINK libspdk_event.so 00:02:39.118 CC lib/bdev/bdev.o 00:02:39.118 CC lib/bdev/bdev_rpc.o 00:02:39.118 CC lib/bdev/bdev_zone.o 00:02:39.118 CC lib/bdev/part.o 00:02:39.118 CC lib/bdev/scsi_nvme.o 00:02:40.509 LIB libspdk_blob.a 00:02:40.509 SO libspdk_blob.so.10.1 00:02:40.509 SYMLINK libspdk_blob.so 00:02:40.770 CC lib/blobfs/blobfs.o 00:02:40.770 CC lib/blobfs/tree.o 00:02:40.770 CC lib/lvol/lvol.o 00:02:41.344 LIB libspdk_bdev.a 00:02:41.344 LIB libspdk_blobfs.a 00:02:41.344 SO libspdk_bdev.so.14.0 00:02:41.344 SO libspdk_blobfs.so.9.0 00:02:41.344 LIB libspdk_lvol.a 00:02:41.606 SO libspdk_lvol.so.9.1 00:02:41.606 SYMLINK libspdk_blobfs.so 00:02:41.606 SYMLINK libspdk_bdev.so 00:02:41.606 SYMLINK libspdk_lvol.so 00:02:41.606 CC lib/nbd/nbd.o 00:02:41.606 CC lib/nbd/nbd_rpc.o 00:02:41.606 CC lib/ublk/ublk.o 00:02:41.606 CC lib/ublk/ublk_rpc.o 00:02:41.606 CC lib/nvmf/ctrlr.o 00:02:41.606 CC lib/nvmf/ctrlr_discovery.o 00:02:41.606 CC lib/nvmf/ctrlr_bdev.o 00:02:41.867 CC lib/nvmf/subsystem.o 00:02:41.867 CC lib/nvmf/nvmf.o 00:02:41.867 CC lib/ftl/ftl_core.o 00:02:41.867 CC lib/nvmf/nvmf_rpc.o 00:02:41.867 CC lib/scsi/dev.o 00:02:41.867 CC lib/scsi/port.o 00:02:41.867 CC lib/ftl/ftl_init.o 00:02:41.867 CC lib/ftl/ftl_layout.o 00:02:41.867 CC lib/nvmf/transport.o 00:02:41.867 CC lib/scsi/lun.o 00:02:41.867 CC lib/ftl/ftl_debug.o 00:02:41.867 CC lib/nvmf/tcp.o 00:02:41.867 CC lib/ftl/ftl_io.o 00:02:41.867 CC lib/scsi/scsi.o 00:02:41.867 CC lib/nvmf/vfio_user.o 00:02:41.867 CC lib/ftl/ftl_sb.o 00:02:41.867 CC lib/scsi/scsi_bdev.o 00:02:41.867 CC lib/ftl/ftl_l2p.o 00:02:41.867 CC lib/nvmf/rdma.o 00:02:41.867 CC lib/ftl/ftl_l2p_flat.o 00:02:41.867 CC lib/scsi/scsi_pr.o 00:02:41.867 CC lib/ftl/ftl_nv_cache.o 00:02:41.867 CC lib/scsi/scsi_rpc.o 00:02:41.867 CC lib/ftl/ftl_band.o 00:02:41.867 CC lib/scsi/task.o 00:02:41.867 CC lib/ftl/ftl_band_ops.o 00:02:41.867 CC lib/ftl/ftl_writer.o 00:02:41.867 CC lib/ftl/ftl_rq.o 00:02:41.867 CC lib/ftl/ftl_reloc.o 00:02:41.867 CC lib/ftl/ftl_l2p_cache.o 00:02:41.867 CC lib/ftl/ftl_p2l.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.867 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.867 CC lib/ftl/utils/ftl_conf.o 00:02:41.867 CC lib/ftl/utils/ftl_md.o 00:02:41.867 CC lib/ftl/utils/ftl_mempool.o 00:02:41.867 CC lib/ftl/utils/ftl_property.o 00:02:41.867 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.867 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.867 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.867 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.867 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.867 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.867 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.867 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:02:41.867 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.867 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.867 CC lib/ftl/base/ftl_base_dev.o 00:02:41.867 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.867 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.867 CC lib/ftl/ftl_trace.o 00:02:42.128 LIB libspdk_nbd.a 00:02:42.128 SO libspdk_nbd.so.6.0 00:02:42.390 LIB libspdk_scsi.a 00:02:42.390 SYMLINK libspdk_nbd.so 00:02:42.390 SO libspdk_scsi.so.8.0 00:02:42.390 LIB libspdk_ublk.a 00:02:42.390 SYMLINK libspdk_scsi.so 00:02:42.390 SO libspdk_ublk.so.2.0 00:02:42.390 SYMLINK libspdk_ublk.so 00:02:42.652 CC lib/vhost/vhost.o 00:02:42.652 CC lib/vhost/vhost_rpc.o 00:02:42.652 CC lib/vhost/vhost_scsi.o 00:02:42.652 CC lib/vhost/vhost_blk.o 00:02:42.652 CC lib/vhost/rte_vhost_user.o 00:02:42.652 CC lib/iscsi/conn.o 00:02:42.652 CC lib/iscsi/iscsi.o 00:02:42.652 CC lib/iscsi/init_grp.o 00:02:42.652 CC lib/iscsi/md5.o 00:02:42.652 CC lib/iscsi/param.o 00:02:42.652 CC lib/iscsi/portal_grp.o 00:02:42.652 LIB libspdk_ftl.a 00:02:42.652 CC lib/iscsi/tgt_node.o 00:02:42.652 CC lib/iscsi/iscsi_subsystem.o 00:02:42.652 CC lib/iscsi/iscsi_rpc.o 00:02:42.652 CC lib/iscsi/task.o 00:02:42.913 SO libspdk_ftl.so.8.0 00:02:43.175 SYMLINK libspdk_ftl.so 00:02:43.437 LIB libspdk_nvmf.a 00:02:43.437 LIB libspdk_vhost.a 00:02:43.699 SO libspdk_nvmf.so.17.0 00:02:43.699 SO libspdk_vhost.so.7.1 00:02:43.699 SYMLINK libspdk_vhost.so 00:02:43.699 SYMLINK libspdk_nvmf.so 00:02:43.699 LIB libspdk_iscsi.a 00:02:43.960 SO libspdk_iscsi.so.7.0 00:02:43.960 SYMLINK libspdk_iscsi.so 00:02:44.532 CC module/vfu_device/vfu_virtio.o 00:02:44.532 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.532 CC module/vfu_device/vfu_virtio_blk.o 00:02:44.532 CC module/vfu_device/vfu_virtio_scsi.o 00:02:44.532 CC module/vfu_device/vfu_virtio_rpc.o 00:02:44.532 CC module/accel/ioat/accel_ioat.o 00:02:44.532 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.532 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.532 CC module/accel/dsa/accel_dsa.o 00:02:44.532 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.532 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.532 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.532 CC module/blob/bdev/blob_bdev.o 00:02:44.532 CC module/accel/iaa/accel_iaa.o 00:02:44.532 CC module/accel/error/accel_error.o 00:02:44.532 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.532 CC module/accel/error/accel_error_rpc.o 00:02:44.532 CC module/sock/posix/posix.o 00:02:44.532 LIB libspdk_env_dpdk_rpc.a 00:02:44.532 SO libspdk_env_dpdk_rpc.so.5.0 00:02:44.532 LIB libspdk_scheduler_gscheduler.a 00:02:44.532 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.532 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.794 SO libspdk_scheduler_gscheduler.so.3.0 00:02:44.794 LIB libspdk_accel_ioat.a 00:02:44.794 LIB libspdk_accel_error.a 00:02:44.794 LIB libspdk_scheduler_dynamic.a 00:02:44.794 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:44.794 LIB libspdk_accel_iaa.a 00:02:44.794 SO libspdk_accel_ioat.so.5.0 00:02:44.794 SO libspdk_accel_error.so.1.0 00:02:44.794 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.794 SO libspdk_scheduler_dynamic.so.3.0 00:02:44.794 LIB libspdk_accel_dsa.a 00:02:44.794 LIB libspdk_blob_bdev.a 00:02:44.794 SO libspdk_accel_iaa.so.2.0 00:02:44.794 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.794 SYMLINK libspdk_accel_error.so 00:02:44.794 SYMLINK libspdk_accel_ioat.so 00:02:44.794 SO libspdk_accel_dsa.so.4.0 00:02:44.794 SO libspdk_blob_bdev.so.10.1 00:02:44.794 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.794 SYMLINK 
libspdk_accel_iaa.so 00:02:44.794 SYMLINK libspdk_accel_dsa.so 00:02:44.794 SYMLINK libspdk_blob_bdev.so 00:02:45.055 LIB libspdk_vfu_device.a 00:02:45.055 SO libspdk_vfu_device.so.2.0 00:02:45.055 SYMLINK libspdk_vfu_device.so 00:02:45.055 LIB libspdk_sock_posix.a 00:02:45.316 SO libspdk_sock_posix.so.5.0 00:02:45.316 CC module/bdev/split/vbdev_split.o 00:02:45.316 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.316 CC module/bdev/null/bdev_null.o 00:02:45.316 CC module/bdev/null/bdev_null_rpc.o 00:02:45.316 CC module/blobfs/bdev/blobfs_bdev.o 00:02:45.316 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:45.316 CC module/bdev/malloc/bdev_malloc.o 00:02:45.316 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.316 CC module/bdev/aio/bdev_aio.o 00:02:45.316 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:45.316 CC module/bdev/nvme/bdev_nvme.o 00:02:45.316 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.316 CC module/bdev/nvme/nvme_rpc.o 00:02:45.316 CC module/bdev/error/vbdev_error.o 00:02:45.316 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.316 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.316 CC module/bdev/nvme/vbdev_opal.o 00:02:45.316 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.316 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.316 CC module/bdev/raid/bdev_raid.o 00:02:45.316 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:45.316 CC module/bdev/iscsi/bdev_iscsi.o 00:02:45.316 CC module/bdev/raid/bdev_raid_rpc.o 00:02:45.316 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:45.316 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:45.316 CC module/bdev/raid/bdev_raid_sb.o 00:02:45.316 CC module/bdev/gpt/gpt.o 00:02:45.316 CC module/bdev/raid/raid0.o 00:02:45.316 CC module/bdev/gpt/vbdev_gpt.o 00:02:45.316 CC module/bdev/delay/vbdev_delay.o 00:02:45.316 CC module/bdev/raid/raid1.o 00:02:45.316 CC module/bdev/raid/concat.o 00:02:45.316 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:45.316 CC module/bdev/ftl/bdev_ftl.o 00:02:45.316 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:45.316 CC module/bdev/lvol/vbdev_lvol.o 00:02:45.316 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:45.316 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.316 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.316 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:45.316 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:45.316 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:45.316 SYMLINK libspdk_sock_posix.so 00:02:45.577 LIB libspdk_blobfs_bdev.a 00:02:45.577 SO libspdk_blobfs_bdev.so.5.0 00:02:45.577 LIB libspdk_bdev_split.a 00:02:45.577 LIB libspdk_bdev_null.a 00:02:45.577 LIB libspdk_bdev_error.a 00:02:45.577 SYMLINK libspdk_blobfs_bdev.so 00:02:45.577 SO libspdk_bdev_split.so.5.0 00:02:45.577 LIB libspdk_bdev_aio.a 00:02:45.577 LIB libspdk_bdev_passthru.a 00:02:45.577 SO libspdk_bdev_null.so.5.0 00:02:45.577 LIB libspdk_bdev_gpt.a 00:02:45.577 SO libspdk_bdev_error.so.5.0 00:02:45.577 LIB libspdk_bdev_ftl.a 00:02:45.577 LIB libspdk_bdev_malloc.a 00:02:45.577 SO libspdk_bdev_passthru.so.5.0 00:02:45.577 SYMLINK libspdk_bdev_split.so 00:02:45.577 LIB libspdk_bdev_zone_block.a 00:02:45.577 SO libspdk_bdev_aio.so.5.0 00:02:45.577 SO libspdk_bdev_ftl.so.5.0 00:02:45.577 SO libspdk_bdev_gpt.so.5.0 00:02:45.577 SO libspdk_bdev_malloc.so.5.0 00:02:45.577 SO libspdk_bdev_zone_block.so.5.0 00:02:45.577 LIB libspdk_bdev_delay.a 00:02:45.577 LIB libspdk_bdev_iscsi.a 00:02:45.577 SYMLINK libspdk_bdev_null.so 00:02:45.577 SYMLINK libspdk_bdev_error.so 00:02:45.577 SYMLINK libspdk_bdev_passthru.so 00:02:45.577 SYMLINK libspdk_bdev_aio.so 
00:02:45.577 SYMLINK libspdk_bdev_ftl.so 00:02:45.577 SYMLINK libspdk_bdev_gpt.so 00:02:45.577 SO libspdk_bdev_iscsi.so.5.0 00:02:45.577 SO libspdk_bdev_delay.so.5.0 00:02:45.838 SYMLINK libspdk_bdev_malloc.so 00:02:45.838 SYMLINK libspdk_bdev_zone_block.so 00:02:45.838 LIB libspdk_bdev_lvol.a 00:02:45.838 SYMLINK libspdk_bdev_iscsi.so 00:02:45.838 SYMLINK libspdk_bdev_delay.so 00:02:45.838 LIB libspdk_bdev_virtio.a 00:02:45.838 SO libspdk_bdev_lvol.so.5.0 00:02:45.838 SO libspdk_bdev_virtio.so.5.0 00:02:45.838 SYMLINK libspdk_bdev_lvol.so 00:02:45.838 SYMLINK libspdk_bdev_virtio.so 00:02:46.100 LIB libspdk_bdev_raid.a 00:02:46.100 SO libspdk_bdev_raid.so.5.0 00:02:46.100 SYMLINK libspdk_bdev_raid.so 00:02:47.044 LIB libspdk_bdev_nvme.a 00:02:47.044 SO libspdk_bdev_nvme.so.6.0 00:02:47.305 SYMLINK libspdk_bdev_nvme.so 00:02:47.567 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.567 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.829 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.829 CC module/event/subsystems/sock/sock.o 00:02:47.829 CC module/event/subsystems/vmd/vmd.o 00:02:47.829 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.829 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.829 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:47.829 LIB libspdk_event_scheduler.a 00:02:47.829 LIB libspdk_event_vhost_blk.a 00:02:47.829 LIB libspdk_event_iobuf.a 00:02:47.829 LIB libspdk_event_sock.a 00:02:47.829 LIB libspdk_event_vmd.a 00:02:47.829 LIB libspdk_event_vfu_tgt.a 00:02:47.829 SO libspdk_event_vhost_blk.so.2.0 00:02:47.829 SO libspdk_event_scheduler.so.3.0 00:02:47.829 SO libspdk_event_iobuf.so.2.0 00:02:47.829 SO libspdk_event_sock.so.4.0 00:02:47.829 SO libspdk_event_vfu_tgt.so.2.0 00:02:47.829 SO libspdk_event_vmd.so.5.0 00:02:48.090 SYMLINK libspdk_event_vhost_blk.so 00:02:48.091 SYMLINK libspdk_event_scheduler.so 00:02:48.091 SYMLINK libspdk_event_sock.so 00:02:48.091 SYMLINK libspdk_event_iobuf.so 00:02:48.091 SYMLINK libspdk_event_vfu_tgt.so 00:02:48.091 SYMLINK libspdk_event_vmd.so 00:02:48.091 CC module/event/subsystems/accel/accel.o 00:02:48.353 LIB libspdk_event_accel.a 00:02:48.353 SO libspdk_event_accel.so.5.0 00:02:48.353 SYMLINK libspdk_event_accel.so 00:02:48.613 CC module/event/subsystems/bdev/bdev.o 00:02:48.874 LIB libspdk_event_bdev.a 00:02:48.874 SO libspdk_event_bdev.so.5.0 00:02:48.874 SYMLINK libspdk_event_bdev.so 00:02:49.136 CC module/event/subsystems/nbd/nbd.o 00:02:49.136 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.136 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.136 CC module/event/subsystems/scsi/scsi.o 00:02:49.136 CC module/event/subsystems/ublk/ublk.o 00:02:49.398 LIB libspdk_event_nbd.a 00:02:49.398 LIB libspdk_event_ublk.a 00:02:49.398 LIB libspdk_event_scsi.a 00:02:49.398 SO libspdk_event_nbd.so.5.0 00:02:49.398 SO libspdk_event_ublk.so.2.0 00:02:49.398 LIB libspdk_event_nvmf.a 00:02:49.398 SO libspdk_event_scsi.so.5.0 00:02:49.398 SYMLINK libspdk_event_nbd.so 00:02:49.398 SO libspdk_event_nvmf.so.5.0 00:02:49.398 SYMLINK libspdk_event_ublk.so 00:02:49.398 SYMLINK libspdk_event_scsi.so 00:02:49.398 SYMLINK libspdk_event_nvmf.so 00:02:49.659 CC module/event/subsystems/iscsi/iscsi.o 00:02:49.659 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:49.921 LIB libspdk_event_vhost_scsi.a 00:02:49.921 LIB libspdk_event_iscsi.a 00:02:49.921 SO libspdk_event_vhost_scsi.so.2.0 00:02:49.921 SO libspdk_event_iscsi.so.5.0 00:02:49.921 SYMLINK libspdk_event_vhost_scsi.so 00:02:49.921 SYMLINK libspdk_event_iscsi.so 
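The CC / LIB / SO / SYMLINK lines in this stretch are SPDK's make output: compile an object, archive a static library, link the versioned shared object, then symlink the unversioned name. The job's actual configure invocation is not visible in this part of the log, so the flags in the sketch below are assumptions inferred from the debug buildtype and the vfu_device / libvfio-user targets being built; treat it as a rough local reproduction, not the CI command line.

    # Sketch, not the CI invocation: a local SPDK build that emits CC/LIB/SO/SYMLINK output like the above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace path taken from the log
    ./configure --enable-debug --with-vfio-user             # assumed flags; the job's real flags are not shown here
    make -j"$(nproc)"                                       # compiles lib/, module/, app/ and test/ targets as above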
00:02:50.184 SO libspdk.so.5.0 00:02:50.184 SYMLINK libspdk.so 00:02:50.445 CC app/trace_record/trace_record.o 00:02:50.445 CC app/spdk_nvme_perf/perf.o 00:02:50.445 CXX app/trace/trace.o 00:02:50.445 TEST_HEADER include/spdk/accel.h 00:02:50.445 TEST_HEADER include/spdk/barrier.h 00:02:50.445 CC app/spdk_lspci/spdk_lspci.o 00:02:50.445 TEST_HEADER include/spdk/accel_module.h 00:02:50.445 TEST_HEADER include/spdk/assert.h 00:02:50.445 CC app/spdk_top/spdk_top.o 00:02:50.445 TEST_HEADER include/spdk/base64.h 00:02:50.445 TEST_HEADER include/spdk/bdev_module.h 00:02:50.445 TEST_HEADER include/spdk/bdev.h 00:02:50.445 TEST_HEADER include/spdk/bdev_zone.h 00:02:50.445 TEST_HEADER include/spdk/bit_array.h 00:02:50.445 TEST_HEADER include/spdk/bit_pool.h 00:02:50.445 CC test/rpc_client/rpc_client_test.o 00:02:50.445 TEST_HEADER include/spdk/blob_bdev.h 00:02:50.445 TEST_HEADER include/spdk/blobfs.h 00:02:50.445 TEST_HEADER include/spdk/conf.h 00:02:50.445 CC app/spdk_nvme_discover/discovery_aer.o 00:02:50.445 TEST_HEADER include/spdk/blob.h 00:02:50.445 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:50.445 TEST_HEADER include/spdk/config.h 00:02:50.445 TEST_HEADER include/spdk/cpuset.h 00:02:50.445 TEST_HEADER include/spdk/crc16.h 00:02:50.445 CC app/spdk_nvme_identify/identify.o 00:02:50.445 TEST_HEADER include/spdk/crc32.h 00:02:50.445 TEST_HEADER include/spdk/crc64.h 00:02:50.445 TEST_HEADER include/spdk/dif.h 00:02:50.445 TEST_HEADER include/spdk/dma.h 00:02:50.445 TEST_HEADER include/spdk/endian.h 00:02:50.445 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.445 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.445 TEST_HEADER include/spdk/env.h 00:02:50.445 TEST_HEADER include/spdk/event.h 00:02:50.445 TEST_HEADER include/spdk/fd_group.h 00:02:50.445 TEST_HEADER include/spdk/fd.h 00:02:50.445 CC app/iscsi_tgt/iscsi_tgt.o 00:02:50.445 CC app/spdk_dd/spdk_dd.o 00:02:50.445 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.445 TEST_HEADER include/spdk/hexlify.h 00:02:50.445 TEST_HEADER include/spdk/ftl.h 00:02:50.445 TEST_HEADER include/spdk/file.h 00:02:50.445 TEST_HEADER include/spdk/idxd.h 00:02:50.445 TEST_HEADER include/spdk/init.h 00:02:50.445 TEST_HEADER include/spdk/histogram_data.h 00:02:50.445 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.445 CC app/nvmf_tgt/nvmf_main.o 00:02:50.445 TEST_HEADER include/spdk/ioat.h 00:02:50.445 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.445 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.445 TEST_HEADER include/spdk/json.h 00:02:50.445 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.445 TEST_HEADER include/spdk/likely.h 00:02:50.445 TEST_HEADER include/spdk/log.h 00:02:50.445 TEST_HEADER include/spdk/memory.h 00:02:50.445 CC app/vhost/vhost.o 00:02:50.445 TEST_HEADER include/spdk/lvol.h 00:02:50.445 TEST_HEADER include/spdk/nbd.h 00:02:50.445 TEST_HEADER include/spdk/notify.h 00:02:50.445 TEST_HEADER include/spdk/nvme.h 00:02:50.445 TEST_HEADER include/spdk/mmio.h 00:02:50.445 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.445 CC app/spdk_tgt/spdk_tgt.o 00:02:50.445 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.445 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.445 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.445 TEST_HEADER include/spdk/nvme_zns.h 00:02:50.445 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.445 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.445 TEST_HEADER include/spdk/nvmf.h 00:02:50.445 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.445 TEST_HEADER include/spdk/opal.h 00:02:50.445 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.445 
TEST_HEADER include/spdk/opal_spec.h 00:02:50.445 TEST_HEADER include/spdk/pci_ids.h 00:02:50.445 TEST_HEADER include/spdk/pipe.h 00:02:50.445 TEST_HEADER include/spdk/queue.h 00:02:50.445 TEST_HEADER include/spdk/reduce.h 00:02:50.445 TEST_HEADER include/spdk/rpc.h 00:02:50.445 TEST_HEADER include/spdk/scsi.h 00:02:50.445 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.445 TEST_HEADER include/spdk/scheduler.h 00:02:50.445 TEST_HEADER include/spdk/sock.h 00:02:50.445 TEST_HEADER include/spdk/stdinc.h 00:02:50.445 TEST_HEADER include/spdk/string.h 00:02:50.445 TEST_HEADER include/spdk/thread.h 00:02:50.445 TEST_HEADER include/spdk/trace.h 00:02:50.445 TEST_HEADER include/spdk/tree.h 00:02:50.445 TEST_HEADER include/spdk/trace_parser.h 00:02:50.445 TEST_HEADER include/spdk/ublk.h 00:02:50.445 TEST_HEADER include/spdk/util.h 00:02:50.445 TEST_HEADER include/spdk/version.h 00:02:50.445 TEST_HEADER include/spdk/uuid.h 00:02:50.445 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.445 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.445 TEST_HEADER include/spdk/xor.h 00:02:50.445 TEST_HEADER include/spdk/vmd.h 00:02:50.445 TEST_HEADER include/spdk/zipf.h 00:02:50.445 CXX test/cpp_headers/accel.o 00:02:50.445 TEST_HEADER include/spdk/vhost.h 00:02:50.445 CXX test/cpp_headers/accel_module.o 00:02:50.445 CXX test/cpp_headers/assert.o 00:02:50.445 CXX test/cpp_headers/barrier.o 00:02:50.445 CXX test/cpp_headers/base64.o 00:02:50.445 CXX test/cpp_headers/bdev.o 00:02:50.445 CXX test/cpp_headers/bdev_module.o 00:02:50.445 CXX test/cpp_headers/bit_pool.o 00:02:50.445 CXX test/cpp_headers/bdev_zone.o 00:02:50.445 CXX test/cpp_headers/bit_array.o 00:02:50.445 CXX test/cpp_headers/blobfs_bdev.o 00:02:50.445 CXX test/cpp_headers/blob_bdev.o 00:02:50.445 CXX test/cpp_headers/blobfs.o 00:02:50.445 CXX test/cpp_headers/conf.o 00:02:50.445 CXX test/cpp_headers/blob.o 00:02:50.735 CXX test/cpp_headers/config.o 00:02:50.735 CXX test/cpp_headers/crc32.o 00:02:50.735 CXX test/cpp_headers/cpuset.o 00:02:50.735 CXX test/cpp_headers/crc16.o 00:02:50.735 CXX test/cpp_headers/crc64.o 00:02:50.735 CXX test/cpp_headers/dma.o 00:02:50.735 CC examples/vmd/led/led.o 00:02:50.735 CXX test/cpp_headers/dif.o 00:02:50.735 CXX test/cpp_headers/env.o 00:02:50.735 CXX test/cpp_headers/env_dpdk.o 00:02:50.735 CXX test/cpp_headers/endian.o 00:02:50.735 CXX test/cpp_headers/event.o 00:02:50.735 CC examples/util/zipf/zipf.o 00:02:50.735 CXX test/cpp_headers/fd.o 00:02:50.735 CXX test/cpp_headers/fd_group.o 00:02:50.735 CC test/app/jsoncat/jsoncat.o 00:02:50.735 CXX test/cpp_headers/ftl.o 00:02:50.735 CXX test/cpp_headers/file.o 00:02:50.735 CXX test/cpp_headers/gpt_spec.o 00:02:50.735 CXX test/cpp_headers/hexlify.o 00:02:50.735 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.735 CXX test/cpp_headers/histogram_data.o 00:02:50.735 CXX test/cpp_headers/idxd_spec.o 00:02:50.735 CC test/thread/poller_perf/poller_perf.o 00:02:50.735 CXX test/cpp_headers/idxd.o 00:02:50.735 CXX test/cpp_headers/init.o 00:02:50.735 CC test/app/stub/stub.o 00:02:50.735 CXX test/cpp_headers/ioat.o 00:02:50.735 CC examples/accel/perf/accel_perf.o 00:02:50.735 CXX test/cpp_headers/iscsi_spec.o 00:02:50.735 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.735 CXX test/cpp_headers/ioat_spec.o 00:02:50.735 CXX test/cpp_headers/json.o 00:02:50.735 CXX test/cpp_headers/jsonrpc.o 00:02:50.735 CC examples/ioat/verify/verify.o 00:02:50.735 CC examples/sock/hello_world/hello_sock.o 00:02:50.735 CXX test/cpp_headers/log.o 00:02:50.735 CXX test/cpp_headers/likely.o 
00:02:50.735 CC test/event/reactor_perf/reactor_perf.o 00:02:50.735 CXX test/cpp_headers/lvol.o 00:02:50.735 CC test/event/event_perf/event_perf.o 00:02:50.735 CC examples/ioat/perf/perf.o 00:02:50.735 CXX test/cpp_headers/mmio.o 00:02:50.735 CXX test/cpp_headers/memory.o 00:02:50.735 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.735 CXX test/cpp_headers/notify.o 00:02:50.735 CXX test/cpp_headers/nbd.o 00:02:50.735 CXX test/cpp_headers/nvme_intel.o 00:02:50.735 CXX test/cpp_headers/nvme.o 00:02:50.735 CXX test/cpp_headers/nvme_ocssd.o 00:02:50.735 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:50.736 CC test/nvme/overhead/overhead.o 00:02:50.736 CXX test/cpp_headers/nvme_spec.o 00:02:50.736 CC examples/nvme/reconnect/reconnect.o 00:02:50.736 CXX test/cpp_headers/nvme_zns.o 00:02:50.736 CC examples/nvme/hotplug/hotplug.o 00:02:50.736 CC test/nvme/sgl/sgl.o 00:02:50.736 CXX test/cpp_headers/nvmf_cmd.o 00:02:50.736 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:50.736 CC test/event/reactor/reactor.o 00:02:50.736 CXX test/cpp_headers/nvmf.o 00:02:50.736 CXX test/cpp_headers/nvmf_spec.o 00:02:50.736 CC test/nvme/startup/startup.o 00:02:50.736 CC test/app/histogram_perf/histogram_perf.o 00:02:50.736 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:50.736 CXX test/cpp_headers/opal.o 00:02:50.736 CC test/nvme/e2edp/nvme_dp.o 00:02:50.736 CXX test/cpp_headers/nvmf_transport.o 00:02:50.736 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.736 CC examples/nvme/arbitration/arbitration.o 00:02:50.736 CXX test/cpp_headers/pci_ids.o 00:02:50.736 CXX test/cpp_headers/opal_spec.o 00:02:50.736 CC examples/nvme/abort/abort.o 00:02:50.736 CC test/nvme/fdp/fdp.o 00:02:50.736 CC test/nvme/aer/aer.o 00:02:50.736 CC test/env/vtophys/vtophys.o 00:02:50.736 CC examples/idxd/perf/perf.o 00:02:50.736 CC test/env/memory/memory_ut.o 00:02:50.736 CC test/env/pci/pci_ut.o 00:02:50.736 CC app/fio/nvme/fio_plugin.o 00:02:50.736 CC test/nvme/reset/reset.o 00:02:50.736 CXX test/cpp_headers/pipe.o 00:02:50.736 CC test/event/app_repeat/app_repeat.o 00:02:50.736 CC test/nvme/err_injection/err_injection.o 00:02:50.736 CXX test/cpp_headers/queue.o 00:02:50.736 CXX test/cpp_headers/reduce.o 00:02:50.736 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.736 CC examples/blob/cli/blobcli.o 00:02:50.736 CXX test/cpp_headers/rpc.o 00:02:50.736 CC test/nvme/compliance/nvme_compliance.o 00:02:50.736 CC test/nvme/connect_stress/connect_stress.o 00:02:50.736 CC test/nvme/reserve/reserve.o 00:02:50.736 CC test/nvme/fused_ordering/fused_ordering.o 00:02:50.736 CXX test/cpp_headers/scheduler.o 00:02:50.736 CC test/accel/dif/dif.o 00:02:50.736 CC test/nvme/cuse/cuse.o 00:02:50.736 CC examples/nvme/hello_world/hello_world.o 00:02:50.736 CC test/nvme/boot_partition/boot_partition.o 00:02:50.736 CC examples/blob/hello_world/hello_blob.o 00:02:50.736 CXX test/cpp_headers/scsi.o 00:02:50.736 CC examples/thread/thread/thread_ex.o 00:02:50.736 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.736 CC test/blobfs/mkfs/mkfs.o 00:02:50.736 CC test/nvme/simple_copy/simple_copy.o 00:02:50.736 CC test/event/scheduler/scheduler.o 00:02:50.736 CC test/bdev/bdevio/bdevio.o 00:02:50.736 CC test/app/bdev_svc/bdev_svc.o 00:02:50.736 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.736 CC examples/nvmf/nvmf/nvmf.o 00:02:50.736 CXX test/cpp_headers/scsi_spec.o 00:02:50.736 CC test/dma/test_dma/test_dma.o 00:02:50.736 CC app/fio/bdev/fio_plugin.o 00:02:50.736 CXX test/cpp_headers/sock.o 00:02:50.736 LINK spdk_lspci 00:02:51.029 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:51.029 CC test/lvol/esnap/esnap.o 00:02:51.029 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:51.029 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.029 LINK nvmf_tgt 00:02:51.029 LINK rpc_client_test 00:02:51.029 LINK spdk_nvme_discover 00:02:51.029 LINK interrupt_tgt 00:02:51.029 LINK spdk_trace_record 00:02:51.029 LINK vhost 00:02:51.396 LINK spdk_tgt 00:02:51.396 LINK led 00:02:51.396 LINK poller_perf 00:02:51.396 LINK iscsi_tgt 00:02:51.396 LINK reactor_perf 00:02:51.396 LINK jsoncat 00:02:51.396 LINK lsvmd 00:02:51.396 LINK env_dpdk_post_init 00:02:51.396 LINK app_repeat 00:02:51.396 LINK reactor 00:02:51.396 LINK vtophys 00:02:51.396 LINK zipf 00:02:51.396 LINK event_perf 00:02:51.396 LINK pmr_persistence 00:02:51.396 LINK startup 00:02:51.396 LINK boot_partition 00:02:51.396 LINK histogram_perf 00:02:51.396 LINK ioat_perf 00:02:51.396 LINK doorbell_aers 00:02:51.396 LINK stub 00:02:51.396 LINK connect_stress 00:02:51.396 LINK cmb_copy 00:02:51.396 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.396 LINK err_injection 00:02:51.396 LINK verify 00:02:51.396 LINK hello_sock 00:02:51.396 LINK hotplug 00:02:51.396 LINK fused_ordering 00:02:51.396 CXX test/cpp_headers/stdinc.o 00:02:51.396 LINK simple_copy 00:02:51.396 CXX test/cpp_headers/string.o 00:02:51.396 CXX test/cpp_headers/thread.o 00:02:51.396 LINK hello_world 00:02:51.396 CXX test/cpp_headers/trace.o 00:02:51.396 CXX test/cpp_headers/trace_parser.o 00:02:51.396 LINK mkfs 00:02:51.396 LINK reserve 00:02:51.396 CXX test/cpp_headers/tree.o 00:02:51.396 LINK bdev_svc 00:02:51.396 CXX test/cpp_headers/ublk.o 00:02:51.396 CXX test/cpp_headers/util.o 00:02:51.396 CXX test/cpp_headers/uuid.o 00:02:51.396 CXX test/cpp_headers/version.o 00:02:51.396 CXX test/cpp_headers/vfio_user_pci.o 00:02:51.396 CXX test/cpp_headers/vfio_user_spec.o 00:02:51.396 CXX test/cpp_headers/vhost.o 00:02:51.396 LINK scheduler 00:02:51.396 CXX test/cpp_headers/vmd.o 00:02:51.396 CXX test/cpp_headers/xor.o 00:02:51.396 CXX test/cpp_headers/zipf.o 00:02:51.396 LINK spdk_dd 00:02:51.396 LINK reset 00:02:51.396 LINK hello_blob 00:02:51.396 LINK hello_bdev 00:02:51.396 LINK sgl 00:02:51.396 LINK overhead 00:02:51.396 LINK thread 00:02:51.396 LINK nvme_dp 00:02:51.396 LINK aer 00:02:51.396 LINK arbitration 00:02:51.396 LINK nvmf 00:02:51.396 LINK fdp 00:02:51.396 LINK spdk_trace 00:02:51.656 LINK idxd_perf 00:02:51.656 LINK reconnect 00:02:51.656 LINK nvme_compliance 00:02:51.656 LINK abort 00:02:51.656 LINK bdevio 00:02:51.656 LINK test_dma 00:02:51.656 LINK dif 00:02:51.656 LINK pci_ut 00:02:51.656 LINK accel_perf 00:02:51.656 LINK blobcli 00:02:51.656 LINK spdk_nvme 00:02:51.656 LINK nvme_manage 00:02:51.656 LINK nvme_fuzz 00:02:51.917 LINK vhost_fuzz 00:02:51.917 LINK spdk_bdev 00:02:51.917 LINK spdk_nvme_perf 00:02:51.917 LINK spdk_nvme_identify 00:02:51.917 LINK mem_callbacks 00:02:51.917 LINK bdevperf 00:02:51.917 LINK memory_ut 00:02:51.917 LINK spdk_top 00:02:52.178 LINK cuse 00:02:52.752 LINK iscsi_fuzz 00:02:55.299 LINK esnap 00:02:55.299 00:02:55.299 real 0m33.047s 00:02:55.299 user 5m7.680s 00:02:55.299 sys 2m58.662s 00:02:55.299 13:13:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:55.299 13:13:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.299 ************************************ 00:02:55.299 END TEST make 00:02:55.299 ************************************ 00:02:55.561 13:13:52 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.561 13:13:52 -- nvmf/common.sh@7 -- # uname -s 00:02:55.561 13:13:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.561 13:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.561 13:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.561 13:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.561 13:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.561 13:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.561 13:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.561 13:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.561 13:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.561 13:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.561 13:13:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:55.561 13:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:55.561 13:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.561 13:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.561 13:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.561 13:13:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:55.561 13:13:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.561 13:13:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.561 13:13:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.561 13:13:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.561 13:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.561 13:13:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.561 13:13:52 -- paths/export.sh@5 -- # export PATH 00:02:55.561 13:13:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.561 13:13:52 -- nvmf/common.sh@46 -- # : 0 00:02:55.561 13:13:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:55.561 13:13:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:55.561 13:13:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:55.561 13:13:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.561 13:13:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.561 13:13:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:55.561 13:13:52 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:02:55.561 13:13:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:55.561 13:13:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.561 13:13:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.561 13:13:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.562 13:13:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.562 13:13:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.562 13:13:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.562 13:13:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.562 13:13:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.562 13:13:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.562 13:13:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.562 13:13:52 -- spdk/autotest.sh@48 -- # udevadm_pid=688140 00:02:55.562 13:13:52 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:55.562 13:13:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.562 13:13:52 -- spdk/autotest.sh@54 -- # echo 688142 00:02:55.562 13:13:52 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:55.562 13:13:52 -- spdk/autotest.sh@56 -- # echo 688143 00:02:55.562 13:13:52 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:55.562 13:13:52 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:55.562 13:13:52 -- spdk/autotest.sh@60 -- # echo 688144 00:02:55.562 13:13:52 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:55.562 13:13:52 -- spdk/autotest.sh@62 -- # echo 688145 00:02:55.562 13:13:52 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:55.562 13:13:52 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:55.562 13:13:52 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:55.562 13:13:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:55.562 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:02:55.562 13:13:52 -- spdk/autotest.sh@70 -- # create_test_list 00:02:55.562 13:13:52 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:55.562 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:02:55.562 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:55.562 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:55.562 13:13:52 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:55.562 13:13:52 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:55.562 13:13:52 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
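The autotest.sh trace that follows exports LCOV_OPTS / LCOV and captures an initial "Baseline" coverage snapshot with lcov before any test has run; the long run of geninfo "no functions found" warnings after it is typically harmless, indicating .gcno notes files with no function records (for example the header-compilation objects under test/cpp_headers). A standalone sketch of that baseline capture is below, with paths copied from the log and the options trimmed to the lcov-relevant subset; it is not the script's verbatim code and assumes a gcov-instrumented build with lcov 1.14.

    # Sketch of the baseline coverage capture performed in the trace below.
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"   # writes the zero-coverage baseline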
00:02:55.562 13:13:52 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:55.562 13:13:52 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:55.562 13:13:52 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:55.562 13:13:52 -- common/autotest_common.sh@1440 -- # uname 00:02:55.562 13:13:52 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:55.562 13:13:52 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:55.562 13:13:52 -- common/autotest_common.sh@1460 -- # uname 00:02:55.562 13:13:52 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:55.562 13:13:52 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:55.562 13:13:52 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:55.562 13:13:52 -- spdk/autotest.sh@83 -- # hash lcov 00:02:55.562 13:13:52 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:55.562 13:13:52 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:55.562 --rc lcov_branch_coverage=1 00:02:55.562 --rc lcov_function_coverage=1 00:02:55.562 --rc genhtml_branch_coverage=1 00:02:55.562 --rc genhtml_function_coverage=1 00:02:55.562 --rc genhtml_legend=1 00:02:55.562 --rc geninfo_all_blocks=1 00:02:55.562 ' 00:02:55.562 13:13:52 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:55.562 --rc lcov_branch_coverage=1 00:02:55.562 --rc lcov_function_coverage=1 00:02:55.562 --rc genhtml_branch_coverage=1 00:02:55.562 --rc genhtml_function_coverage=1 00:02:55.562 --rc genhtml_legend=1 00:02:55.562 --rc geninfo_all_blocks=1 00:02:55.562 ' 00:02:55.562 13:13:52 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:55.562 --rc lcov_branch_coverage=1 00:02:55.562 --rc lcov_function_coverage=1 00:02:55.562 --rc genhtml_branch_coverage=1 00:02:55.562 --rc genhtml_function_coverage=1 00:02:55.562 --rc genhtml_legend=1 00:02:55.562 --rc geninfo_all_blocks=1 00:02:55.562 --no-external' 00:02:55.562 13:13:52 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:55.562 --rc lcov_branch_coverage=1 00:02:55.562 --rc lcov_function_coverage=1 00:02:55.562 --rc genhtml_branch_coverage=1 00:02:55.562 --rc genhtml_function_coverage=1 00:02:55.562 --rc genhtml_legend=1 00:02:55.562 --rc geninfo_all_blocks=1 00:02:55.562 --no-external' 00:02:55.562 13:13:52 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:55.562 lcov: LCOV version 1.14 00:02:55.562 13:13:52 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:58.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:58.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:58.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:58.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:58.868 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:58.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:20.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:20.847 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions 
found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:20.848 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:20.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:20.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:20.849 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:20.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:20.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:20.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:20.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:20.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:20.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:20.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:20.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:23.397 13:14:20 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:23.397 13:14:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:23.397 13:14:20 -- common/autotest_common.sh@10 -- # set +x 00:03:23.397 13:14:20 -- spdk/autotest.sh@102 -- # rm -f 00:03:23.397 13:14:20 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.699 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:26.699 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:26.699 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:26.699 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:26.699 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:26.699 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:26.960 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:26.960 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.221 13:14:24 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:27.221 13:14:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:27.221 13:14:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:27.221 13:14:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:27.221 13:14:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.221 13:14:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:27.221 13:14:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:27.221 13:14:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.221 13:14:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.221 13:14:24 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:27.221 13:14:24 -- 
spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:27.221 13:14:24 -- spdk/autotest.sh@121 -- # grep -v p 00:03:27.221 13:14:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:27.221 13:14:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:27.221 13:14:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:27.221 13:14:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:27.221 13:14:24 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.482 No valid GPT data, bailing 00:03:27.482 13:14:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.482 13:14:24 -- scripts/common.sh@393 -- # pt= 00:03:27.482 13:14:24 -- scripts/common.sh@394 -- # return 1 00:03:27.482 13:14:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.482 1+0 records in 00:03:27.482 1+0 records out 00:03:27.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00132761 s, 790 MB/s 00:03:27.482 13:14:24 -- spdk/autotest.sh@129 -- # sync 00:03:27.482 13:14:24 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.482 13:14:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.482 13:14:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:35.674 13:14:32 -- spdk/autotest.sh@135 -- # uname -s 00:03:35.674 13:14:32 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:35.674 13:14:32 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.674 13:14:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.674 13:14:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.674 13:14:32 -- common/autotest_common.sh@10 -- # set +x 00:03:35.674 ************************************ 00:03:35.674 START TEST setup.sh 00:03:35.674 ************************************ 00:03:35.674 13:14:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.674 * Looking for test storage... 00:03:35.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.674 13:14:32 -- setup/test-setup.sh@10 -- # uname -s 00:03:35.674 13:14:32 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:35.674 13:14:32 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:35.674 13:14:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.674 13:14:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.674 13:14:32 -- common/autotest_common.sh@10 -- # set +x 00:03:35.674 ************************************ 00:03:35.674 START TEST acl 00:03:35.674 ************************************ 00:03:35.675 13:14:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:35.675 * Looking for test storage... 
00:03:35.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:35.675 13:14:32 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.675 13:14:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:35.675 13:14:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:35.675 13:14:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:35.675 13:14:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:35.675 13:14:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:35.675 13:14:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:35.675 13:14:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.675 13:14:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:35.675 13:14:32 -- setup/acl.sh@12 -- # devs=() 00:03:35.675 13:14:32 -- setup/acl.sh@12 -- # declare -a devs 00:03:35.675 13:14:32 -- setup/acl.sh@13 -- # drivers=() 00:03:35.675 13:14:32 -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.675 13:14:32 -- setup/acl.sh@51 -- # setup reset 00:03:35.675 13:14:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.675 13:14:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.888 13:14:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.888 13:14:36 -- setup/acl.sh@16 -- # local dev driver 00:03:39.888 13:14:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.888 13:14:36 -- setup/acl.sh@15 -- # setup output status 00:03:39.888 13:14:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.888 13:14:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:42.554 Hugepages 00:03:42.555 node hugesize free / total 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # continue 00:03:42.555 13:14:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # continue 00:03:42.555 13:14:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.555 13:14:39 -- setup/acl.sh@19 -- # continue 00:03:42.555 13:14:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 00:03:42.555 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.555 13:14:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.555 13:14:40 -- setup/acl.sh@19 -- # continue 00:03:42.555 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.555 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.555 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.555 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.555 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.555 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
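The xtrace above shows get_zoned_devs probing each NVMe block device's /sys/block/<dev>/queue/zoned attribute and treating a missing attribute or the value "none" as "not zoned" before the ACL tests run. A minimal stand-alone sketch of that probe (the loop structure, variable names, and echo output here are illustrative, not lifted from setup/common.sh):

  # Report whether the kernel advertises a zoned model for each NVMe block device.
  # A missing queue/zoned attribute or the value "none" means the device is not zoned.
  for dev in /sys/block/nvme*; do
      zoned="$dev/queue/zoned"
      if [[ -e "$zoned" ]] && [[ "$(cat "$zoned")" != none ]]; then
          echo "zoned:     ${dev##*/} ($(cat "$zoned"))"
      else
          echo "not zoned: ${dev##*/}"
      fi
  done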
00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.817 13:14:40 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:42.817 13:14:40 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.817 13:14:40 -- setup/acl.sh@20 -- # continue 00:03:42.817 13:14:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.817 13:14:40 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:42.817 13:14:40 -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.817 13:14:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.817 13:14:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.817 13:14:40 -- common/autotest_common.sh@10 -- # set +x 00:03:42.817 ************************************ 00:03:42.817 START TEST denied 00:03:42.817 ************************************ 00:03:42.817 13:14:40 -- common/autotest_common.sh@1104 -- # denied 00:03:42.817 13:14:40 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:42.817 13:14:40 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:42.817 13:14:40 -- setup/acl.sh@38 -- # setup output config 00:03:42.817 13:14:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.817 13:14:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.028 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:47.028 13:14:44 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:47.028 13:14:44 -- setup/acl.sh@28 -- # local dev driver 00:03:47.028 13:14:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.028 13:14:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:47.028 13:14:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:47.028 13:14:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.028 13:14:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.028 13:14:44 -- setup/acl.sh@41 -- # setup reset 00:03:47.028 13:14:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.028 13:14:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.321 00:03:52.321 real 0m8.721s 00:03:52.321 user 0m2.919s 00:03:52.321 sys 0m5.084s 00:03:52.321 13:14:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.321 13:14:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.321 ************************************ 00:03:52.321 END TEST denied 00:03:52.321 ************************************ 00:03:52.321 13:14:48 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:52.321 13:14:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.321 13:14:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.321 13:14:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.321 ************************************ 00:03:52.321 START TEST allowed 00:03:52.321 ************************************ 00:03:52.321 13:14:48 -- common/autotest_common.sh@1104 -- # allowed 00:03:52.321 13:14:48 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:52.321 13:14:48 -- setup/acl.sh@45 -- # setup output config 00:03:52.321 13:14:48 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:52.321 13:14:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.321 13:14:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
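The denied/allowed pair above drives scripts/setup.sh entirely through environment variables: with PCI_BLOCKED set, "setup.sh config" prints "Skipping denied controller" for the listed BDF, and with PCI_ALLOWED set it binds only that controller (the nvme -> vfio-pci line that follows). A hedged usage sketch of the same pattern outside the test harness, using the BDF from this run (substitute your own):

  # Deny one controller while binding everything else for SPDK:
  PCI_BLOCKED='0000:65:00.0' ./scripts/setup.sh config

  # Or bind only that controller, then return it to its kernel driver:
  PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config
  ./scripts/setup.sh reset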
00:03:57.616 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:57.617 13:14:54 -- setup/acl.sh@47 -- # verify 00:03:57.617 13:14:54 -- setup/acl.sh@28 -- # local dev driver 00:03:57.617 13:14:54 -- setup/acl.sh@48 -- # setup reset 00:03:57.617 13:14:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.617 13:14:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.891 00:04:01.891 real 0m9.622s 00:04:01.891 user 0m2.861s 00:04:01.891 sys 0m5.025s 00:04:01.891 13:14:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.891 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:04:01.891 ************************************ 00:04:01.891 END TEST allowed 00:04:01.891 ************************************ 00:04:01.891 00:04:01.891 real 0m25.949s 00:04:01.891 user 0m8.555s 00:04:01.891 sys 0m15.142s 00:04:01.891 13:14:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.891 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:04:01.891 ************************************ 00:04:01.891 END TEST acl 00:04:01.891 ************************************ 00:04:01.891 13:14:58 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.891 13:14:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.891 13:14:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.891 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:04:01.891 ************************************ 00:04:01.891 START TEST hugepages 00:04:01.891 ************************************ 00:04:01.891 13:14:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.891 * Looking for test storage... 
00:04:01.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:01.891 13:14:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.891 13:14:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.891 13:14:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.891 13:14:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.891 13:14:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.891 13:14:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.891 13:14:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.891 13:14:58 -- setup/common.sh@18 -- # local node= 00:04:01.891 13:14:58 -- setup/common.sh@19 -- # local var val 00:04:01.891 13:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.891 13:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.891 13:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.891 13:14:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.891 13:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.891 13:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.891 13:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 100875620 kB' 'MemAvailable: 104606928 kB' 'Buffers: 2704 kB' 'Cached: 16246864 kB' 'SwapCached: 0 kB' 'Active: 13098644 kB' 'Inactive: 3693560 kB' 'Active(anon): 12618844 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546060 kB' 'Mapped: 176872 kB' 'Shmem: 12076208 kB' 'KReclaimable: 598892 kB' 'Slab: 1490500 kB' 'SReclaimable: 598892 kB' 'SUnreclaim: 891608 kB' 'KernelStack: 27232 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 14201560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.891 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.891 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.892 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.892 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # continue 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.893 13:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.893 13:14:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.893 13:14:58 -- setup/common.sh@33 -- # echo 2048 00:04:01.893 13:14:58 -- setup/common.sh@33 -- # return 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:01.893 13:14:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:01.893 13:14:58 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:01.893 13:14:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:01.893 13:14:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:01.893 13:14:58 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:01.893 13:14:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:01.893 13:14:58 -- setup/hugepages.sh@207 -- # get_nodes 00:04:01.893 13:14:58 -- setup/hugepages.sh@27 -- # local node 00:04:01.893 13:14:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.893 13:14:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:01.893 13:14:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.893 13:14:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.893 13:14:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.893 13:14:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.893 13:14:58 -- setup/hugepages.sh@208 -- # clear_hp 00:04:01.893 13:14:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:01.893 13:14:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.893 13:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.893 13:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.893 13:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.893 13:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.893 13:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.893 13:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.893 13:14:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.893 13:14:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:01.893 13:14:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.893 13:14:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.893 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:04:01.893 ************************************ 00:04:01.893 START TEST default_setup 00:04:01.893 ************************************ 00:04:01.893 13:14:58 -- common/autotest_common.sh@1104 -- # default_setup 00:04:01.893 13:14:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.893 13:14:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.893 13:14:58 -- setup/hugepages.sh@51 -- # shift 00:04:01.893 13:14:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.893 13:14:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.893 13:14:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.893 13:14:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.893 13:14:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.893 13:14:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.893 13:14:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.893 13:14:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.893 13:14:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.893 13:14:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.893 13:14:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
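The long get_meminfo walk above is setup/common.sh reading /proc/meminfo with "IFS=': ' read -r var val _" until it reaches the requested key (Hugepagesize -> 2048), and clear_hp then zeroing every per-node nr_hugepages file before default_setup starts. A compact sketch of the same key lookup (the function name get_meminfo_field is illustrative; the real helper also accepts an optional NUMA node and reads /sys/devices/system/node/node<N>/meminfo, which this sketch omits):

  # Return the value column for one /proc/meminfo key, splitting on ':' and spaces
  # the same way the traced helper does.
  get_meminfo_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  hugepagesize_kb=$(get_meminfo_field Hugepagesize)   # 2048 on this runner
  echo "default hugepage size: ${hugepagesize_kb} kB"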
00:04:01.893 13:14:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.893 13:14:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.893 13:14:58 -- setup/hugepages.sh@73 -- # return 0 00:04:01.893 13:14:58 -- setup/hugepages.sh@137 -- # setup output 00:04:01.893 13:14:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.893 13:14:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.200 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.200 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:05.466 13:15:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:05.466 13:15:02 -- setup/hugepages.sh@89 -- # local node 00:04:05.466 13:15:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.466 13:15:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.466 13:15:02 -- setup/hugepages.sh@92 -- # local surp 00:04:05.466 13:15:02 -- setup/hugepages.sh@93 -- # local resv 00:04:05.466 13:15:02 -- setup/hugepages.sh@94 -- # local anon 00:04:05.466 13:15:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.466 13:15:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.466 13:15:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.466 13:15:02 -- setup/common.sh@18 -- # local node= 00:04:05.466 13:15:02 -- setup/common.sh@19 -- # local var val 00:04:05.466 13:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.466 13:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.466 13:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.466 13:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.466 13:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.466 13:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102968492 kB' 'MemAvailable: 106699832 kB' 'Buffers: 2704 kB' 'Cached: 16246988 kB' 'SwapCached: 0 kB' 'Active: 13122852 kB' 'Inactive: 3693560 kB' 'Active(anon): 12643052 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569740 kB' 'Mapped: 178048 kB' 'Shmem: 12076332 kB' 'KReclaimable: 598924 kB' 'Slab: 1488588 kB' 'SReclaimable: 598924 kB' 'SUnreclaim: 889664 kB' 'KernelStack: 
27392 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14228624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235964 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.466 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.466 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.467 13:15:02 -- setup/common.sh@33 -- # echo 0 00:04:05.467 13:15:02 -- setup/common.sh@33 -- # return 0 00:04:05.467 13:15:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.467 13:15:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.467 13:15:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.467 13:15:02 -- setup/common.sh@18 -- # local node= 00:04:05.467 13:15:02 -- setup/common.sh@19 -- # local var val 00:04:05.467 13:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.467 13:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.467 13:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.467 13:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.467 13:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.467 13:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102969660 kB' 'MemAvailable: 106701000 kB' 'Buffers: 2704 kB' 'Cached: 16246992 kB' 'SwapCached: 0 kB' 'Active: 13122768 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642968 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569588 kB' 'Mapped: 177988 kB' 'Shmem: 12076336 kB' 'KReclaimable: 598924 kB' 'Slab: 1488584 kB' 'SReclaimable: 598924 kB' 'SUnreclaim: 889660 kB' 'KernelStack: 27376 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14226984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.467 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.467 13:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 
13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': 
' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.468 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.468 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.469 13:15:02 -- setup/common.sh@33 -- # echo 0 00:04:05.469 13:15:02 -- setup/common.sh@33 -- # return 0 00:04:05.469 13:15:02 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.469 13:15:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.469 13:15:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.469 13:15:02 -- setup/common.sh@18 -- # local node= 00:04:05.469 13:15:02 -- setup/common.sh@19 -- # local var val 00:04:05.469 13:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.469 13:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.469 13:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.469 13:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.469 13:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.469 13:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102969844 kB' 'MemAvailable: 106701184 kB' 'Buffers: 2704 kB' 'Cached: 16247004 kB' 'SwapCached: 0 kB' 'Active: 13122224 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642424 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569504 kB' 'Mapped: 177912 kB' 'Shmem: 12076348 kB' 'KReclaimable: 598924 kB' 'Slab: 1488608 kB' 'SReclaimable: 598924 kB' 'SUnreclaim: 889684 kB' 'KernelStack: 27264 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14227168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.469 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.469 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- 
setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.470 13:15:02 -- setup/common.sh@33 -- # echo 0 00:04:05.470 13:15:02 -- setup/common.sh@33 -- # return 0 00:04:05.470 13:15:02 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.470 13:15:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.470 nr_hugepages=1024 00:04:05.470 13:15:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.470 resv_hugepages=0 00:04:05.470 13:15:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.470 surplus_hugepages=0 00:04:05.470 13:15:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.470 anon_hugepages=0 00:04:05.470 13:15:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.470 13:15:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.470 13:15:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.470 13:15:02 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:04:05.470 13:15:02 -- setup/common.sh@18 -- # local node= 00:04:05.470 13:15:02 -- setup/common.sh@19 -- # local var val 00:04:05.470 13:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.470 13:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.470 13:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.470 13:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.470 13:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.470 13:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102970604 kB' 'MemAvailable: 106701944 kB' 'Buffers: 2704 kB' 'Cached: 16247004 kB' 'SwapCached: 0 kB' 'Active: 13121992 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642192 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569340 kB' 'Mapped: 177912 kB' 'Shmem: 12076348 kB' 'KReclaimable: 598924 kB' 'Slab: 1488608 kB' 'SReclaimable: 598924 kB' 'SUnreclaim: 889684 kB' 'KernelStack: 27344 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14227016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.470 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.470 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 
13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 
13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.471 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.471 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.472 13:15:02 -- setup/common.sh@33 -- # echo 1024 00:04:05.472 13:15:02 -- setup/common.sh@33 -- # return 0 00:04:05.472 13:15:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.472 13:15:02 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.472 13:15:02 -- setup/hugepages.sh@27 -- # local node 00:04:05.472 13:15:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.472 13:15:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.472 13:15:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.472 13:15:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.472 13:15:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.472 13:15:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.472 13:15:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.472 13:15:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.472 13:15:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.472 13:15:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.472 13:15:02 -- setup/common.sh@18 -- # local node=0 00:04:05.472 13:15:02 -- setup/common.sh@19 -- # local var val 00:04:05.472 13:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.472 13:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.472 13:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.472 13:15:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.472 13:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.472 13:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58303660 
kB' 'MemUsed: 7355348 kB' 'SwapCached: 0 kB' 'Active: 2823280 kB' 'Inactive: 235936 kB' 'Active(anon): 2583856 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801204 kB' 'Mapped: 89056 kB' 'AnonPages: 261000 kB' 'Shmem: 2325844 kB' 'KernelStack: 15256 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273472 kB' 'Slab: 775588 kB' 'SReclaimable: 273472 kB' 'SUnreclaim: 502116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.472 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.472 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 
-- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # continue 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.473 13:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.473 13:15:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.473 13:15:02 -- setup/common.sh@33 -- # echo 0 00:04:05.473 13:15:02 -- setup/common.sh@33 -- # return 0 00:04:05.473 13:15:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.473 13:15:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.473 13:15:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.473 13:15:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.473 13:15:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.473 node0=1024 expecting 1024 00:04:05.473 13:15:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.473 00:04:05.473 real 0m4.047s 00:04:05.473 user 0m1.590s 00:04:05.473 sys 0m2.475s 00:04:05.473 13:15:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.473 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:05.473 ************************************ 00:04:05.473 END TEST default_setup 00:04:05.473 ************************************ 00:04:05.473 13:15:02 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:05.473 13:15:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:05.473 13:15:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.473 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:05.473 ************************************ 00:04:05.473 START TEST per_node_1G_alloc 00:04:05.473 ************************************ 00:04:05.473 13:15:02 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:05.473 13:15:02 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:05.473 13:15:02 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:05.473 13:15:02 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:05.473 13:15:02 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:05.473 13:15:02 -- setup/hugepages.sh@51 -- # shift 00:04:05.473 13:15:02 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:05.473 13:15:02 -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.473 13:15:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.473 13:15:02 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:05.473 13:15:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:05.473 13:15:02 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:05.473 13:15:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.473 13:15:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.473 13:15:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:05.473 13:15:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.473 13:15:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.473 13:15:02 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:05.473 13:15:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.473 13:15:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:05.473 13:15:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.473 13:15:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:05.473 13:15:02 -- setup/hugepages.sh@73 -- # return 0 00:04:05.473 13:15:02 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:04:05.473 13:15:02 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:05.473 13:15:02 -- setup/hugepages.sh@146 -- # setup output 00:04:05.473 13:15:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.473 13:15:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.720 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:09.720 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.720 13:15:06 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:09.720 13:15:06 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:09.720 13:15:06 -- setup/hugepages.sh@89 -- # local node 00:04:09.720 13:15:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.720 13:15:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.720 13:15:06 -- setup/hugepages.sh@92 -- # local surp 00:04:09.720 13:15:06 -- setup/hugepages.sh@93 -- # local resv 00:04:09.720 13:15:06 -- setup/hugepages.sh@94 -- # local anon 00:04:09.720 13:15:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.720 13:15:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.720 13:15:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.720 13:15:06 -- setup/common.sh@18 -- # local node= 00:04:09.720 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.720 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.720 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.720 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.720 13:15:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.720 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.720 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.720 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.720 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103033068 kB' 'MemAvailable: 106764400 kB' 'Buffers: 2704 kB' 'Cached: 16247132 kB' 'SwapCached: 0 kB' 'Active: 13122668 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642868 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
569288 kB' 'Mapped: 176880 kB' 'Shmem: 12076476 kB' 'KReclaimable: 598916 kB' 'Slab: 1488492 kB' 'SReclaimable: 598916 kB' 'SUnreclaim: 889576 kB' 'KernelStack: 27440 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14221848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236156 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 
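The per_node_1G_alloc preamble above sets NRHUGE=512 and HUGENODE=0,1 before calling scripts/setup.sh, i.e. 512 x 2 MB hugepages (1 GB) on each of NUMA nodes 0 and 1. A minimal sketch of an equivalent per-node reservation, assuming the stock 2048 kB hugepage size and the kernel's standard per-node sysfs knobs (the exact commands setup.sh issues are not shown in this log):

    # Reserve 512 x 2 MB hugepages on NUMA nodes 0 and 1 (1 GB per node).
    # Uses the standard kernel sysfs interface; setup.sh may differ in detail.
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done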
00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.721 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.721 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.722 13:15:06 -- setup/common.sh@33 -- # echo 0 00:04:09.722 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.722 13:15:06 -- setup/hugepages.sh@97 -- # anon=0 00:04:09.722 13:15:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.722 13:15:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.722 13:15:06 -- setup/common.sh@18 -- # local node= 00:04:09.722 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.722 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.722 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.722 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.722 13:15:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.722 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.722 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103034240 kB' 'MemAvailable: 106765572 kB' 'Buffers: 2704 kB' 'Cached: 16247132 kB' 'SwapCached: 0 kB' 'Active: 13122120 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642320 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568692 kB' 'Mapped: 176908 kB' 'Shmem: 12076476 kB' 'KReclaimable: 598916 kB' 'Slab: 1488492 kB' 'SReclaimable: 598916 kB' 'SUnreclaim: 889576 kB' 'KernelStack: 27456 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14220212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236108 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 
13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 
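The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' above are get_meminfo stepping through every /proc/meminfo key until it reaches the one requested. A simplified stand-alone equivalent of that loop (a sketch mirroring the trace, not the exact SPDK source):

    #!/usr/bin/env bash
    # Print the value of one meminfo key: split each line on ': ',
    # skip non-matching keys, and echo the first match.
    shopt -s extglob                      # for the "Node <id> " prefix strip below
    get=${1:-HugePages_Surp}
    mapfile -t mem < /proc/meminfo
    mem=("${mem[@]#Node +([0-9]) }")      # per-node meminfo lines carry this prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        exit 0
    done
    exit 1                                # key not found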
00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.722 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.722 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.723 13:15:06 -- setup/common.sh@33 -- # echo 0 00:04:09.723 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.723 13:15:06 -- setup/hugepages.sh@99 -- # surp=0 00:04:09.723 13:15:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.723 13:15:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.723 13:15:06 -- setup/common.sh@18 -- # local node= 00:04:09.723 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.723 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.723 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.723 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.723 13:15:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.723 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.723 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103032632 kB' 'MemAvailable: 106763964 kB' 'Buffers: 2704 kB' 'Cached: 16247144 kB' 'SwapCached: 0 kB' 'Active: 13122108 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642308 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569168 kB' 'Mapped: 176832 kB' 'Shmem: 12076488 kB' 'KReclaimable: 598916 kB' 'Slab: 1488484 kB' 'SReclaimable: 598916 kB' 'SUnreclaim: 889568 kB' 'KernelStack: 27440 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14221876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236124 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 
13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.723 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.723 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 
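Each get_meminfo call above also probes /sys/devices/system/node/node<N>/meminfo; here the node id is empty, so the [[ -n '' ]] test fails and the scan falls back to /proc/meminfo. A sketch of that source selection, with the behaviour inferred from the trace rather than taken from the script:

    # Choose a per-node meminfo file when a node id is given and present,
    # otherwise fall back to the system-wide /proc/meminfo.
    node=${1:-}
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "reading hugepage counters from $mem_f"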
00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.724 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.724 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.725 13:15:06 -- setup/common.sh@33 -- # echo 0 00:04:09.725 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.725 13:15:06 -- setup/hugepages.sh@100 -- # resv=0 00:04:09.725 13:15:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.725 nr_hugepages=1024 00:04:09.725 13:15:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.725 resv_hugepages=0 00:04:09.725 13:15:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.725 surplus_hugepages=0 00:04:09.725 13:15:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.725 anon_hugepages=0 00:04:09.725 13:15:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.725 13:15:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
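With anon, surplus and reserved pages all reported as 0, verify_nr_hugepages above confirms that the configured 1024 pages are fully accounted for. A stand-alone sketch of that accounting check (assumed equivalent to the (( ... )) tests in the trace, not the SPDK source itself):

    # The configured count should equal HugePages_Total once surplus and
    # reserved pages are taken into account, as in the trace above.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( expected == total + surp + resv )) && (( expected == total )); then
        echo "nr_hugepages=$total matches the expected $expected"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
    fi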
00:04:09.725 13:15:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.725 13:15:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.725 13:15:06 -- setup/common.sh@18 -- # local node= 00:04:09.725 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.725 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.725 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.725 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.725 13:15:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.725 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.725 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103032188 kB' 'MemAvailable: 106763520 kB' 'Buffers: 2704 kB' 'Cached: 16247144 kB' 'SwapCached: 0 kB' 'Active: 13122112 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642312 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569160 kB' 'Mapped: 176832 kB' 'Shmem: 12076488 kB' 'KReclaimable: 598916 kB' 'Slab: 1488484 kB' 'SReclaimable: 598916 kB' 'SUnreclaim: 889568 kB' 'KernelStack: 27360 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14221888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236124 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.725 13:15:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.725 13:15:06 
-- setup/common.sh@32 -- # continue 00:04:09.725 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 
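The long run of [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue pairs here is simply xtrace showing get_meminfo walking /proc/meminfo one field at a time with IFS=': ' until it reaches the field it was asked for. A stripped-down sketch of that scan, assuming the plain while-read form below (the traced helper additionally snapshots the file with mapfile and handles per-node files):

  # Read "key: value ..." pairs and stop at the first key that matches.
  get_meminfo_scan() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each non-matching key shows up as a "continue" line in xtrace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_scan HugePages_Total   # echoed 1024 on this runner, as the trace shows a little further down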
00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- 
setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.726 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.726 13:15:06 -- setup/common.sh@33 -- # echo 1024 00:04:09.726 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.726 13:15:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.726 13:15:06 -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.726 13:15:06 -- setup/hugepages.sh@27 -- # local node 00:04:09.726 13:15:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.726 13:15:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.726 13:15:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.726 13:15:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:09.726 13:15:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.726 13:15:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.726 13:15:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.726 13:15:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.726 13:15:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.726 13:15:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.726 13:15:06 -- setup/common.sh@18 -- # local node=0 00:04:09.726 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.726 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.726 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.726 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.726 13:15:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.726 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.726 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.726 13:15:06 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.727 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59392208 kB' 'MemUsed: 6266800 kB' 'SwapCached: 0 kB' 'Active: 2821916 kB' 'Inactive: 235936 kB' 'Active(anon): 2582492 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801240 kB' 'Mapped: 88128 kB' 'AnonPages: 259764 kB' 'Shmem: 2325880 kB' 'KernelStack: 15288 kB' 'PageTables: 5280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273472 kB' 'Slab: 775756 kB' 'SReclaimable: 273472 kB' 'SUnreclaim: 502284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # 
continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 
13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.727 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.727 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.727 13:15:06 -- setup/common.sh@33 -- # echo 0 00:04:09.727 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.727 13:15:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.727 13:15:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.727 13:15:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.727 13:15:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:09.727 13:15:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.727 13:15:06 -- setup/common.sh@18 -- # local node=1 00:04:09.727 13:15:06 -- setup/common.sh@19 -- # local var val 00:04:09.727 13:15:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.727 13:15:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.728 13:15:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:09.728 13:15:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:09.728 13:15:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.728 13:15:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 43638576 kB' 'MemUsed: 17041260 kB' 'SwapCached: 0 kB' 'Active: 10305464 kB' 'Inactive: 3457624 kB' 'Active(anon): 10065088 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13448652 kB' 'Mapped: 88704 kB' 'AnonPages: 315116 kB' 'Shmem: 9750652 kB' 'KernelStack: 12168 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325444 kB' 'Slab: 712728 kB' 'SReclaimable: 325444 kB' 'SUnreclaim: 387284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 
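When get_meminfo is called with a node number (node=0, then node=1 above), the same scan runs against /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo; every line in that file is prefixed with "Node <N> ", which the traced helper strips via the "${mem[@]#Node +([0-9]) }" expansion before parsing. A small sketch of an equivalent per-node lookup, with sed-based prefix stripping as this sketch's own assumption:

  # Read one field from a NUMA node's meminfo, e.g. per_node_meminfo 1 HugePages_Surp
  per_node_meminfo() {
      local node=$1 get=$2 var val _
      local mem_f=/sys/devices/system/node/node${node}/meminfo
      [[ -e $mem_f ]] || { echo "no meminfo for node${node}" >&2; return 1; }
      # Lines look like "Node 1 HugePages_Total:   512"; drop the "Node 1 " prefix first.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed "s/^Node $node //" "$mem_f")
      return 1
  }

  per_node_meminfo 0 HugePages_Surp    # 0 on this runner
  per_node_meminfo 1 HugePages_Total   # 512 on this runner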
00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.728 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.728 13:15:06 -- setup/common.sh@32 -- # continue 00:04:09.729 13:15:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.729 13:15:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.729 13:15:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.729 13:15:06 -- setup/common.sh@33 -- # echo 0 00:04:09.729 13:15:06 -- setup/common.sh@33 -- # return 0 00:04:09.729 13:15:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.729 13:15:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.729 13:15:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.729 13:15:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.729 node0=512 expecting 512 00:04:09.729 13:15:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.729 13:15:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.729 13:15:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.729 13:15:06 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:09.729 node1=512 expecting 512 00:04:09.729 13:15:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:09.729 00:04:09.729 real 0m3.930s 00:04:09.729 user 0m1.579s 00:04:09.729 sys 0m2.417s 00:04:09.729 13:15:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.729 13:15:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.729 ************************************ 00:04:09.729 END TEST per_node_1G_alloc 00:04:09.729 ************************************ 00:04:09.729 13:15:06 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.729 13:15:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.729 13:15:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.729 13:15:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.729 ************************************ 00:04:09.729 START TEST even_2G_alloc 00:04:09.729 ************************************ 00:04:09.729 13:15:06 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:09.729 13:15:06 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:09.729 13:15:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.729 13:15:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.729 13:15:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.729 13:15:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.729 13:15:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.729 13:15:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.729 13:15:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:09.729 13:15:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.729 13:15:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.729 13:15:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:09.729 13:15:06 -- setup/hugepages.sh@83 -- # : 512 00:04:09.729 13:15:06 -- setup/hugepages.sh@84 -- # : 1 00:04:09.729 13:15:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:09.729 13:15:06 -- setup/hugepages.sh@83 -- # : 0 00:04:09.729 13:15:06 -- setup/hugepages.sh@84 -- # : 0 00:04:09.729 13:15:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:09.729 13:15:06 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:09.729 13:15:06 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:09.729 13:15:06 -- setup/hugepages.sh@153 -- # setup output 00:04:09.729 13:15:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.729 13:15:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.038 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:13.038 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.038 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.304 13:15:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:13.304 13:15:10 -- setup/hugepages.sh@89 -- # local node 00:04:13.304 13:15:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.304 13:15:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.304 13:15:10 -- setup/hugepages.sh@92 -- # local surp 00:04:13.304 13:15:10 -- setup/hugepages.sh@93 -- # local resv 00:04:13.304 13:15:10 -- setup/hugepages.sh@94 -- # local anon 00:04:13.304 13:15:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.304 13:15:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.304 13:15:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.304 13:15:10 -- setup/common.sh@18 -- # local node= 00:04:13.304 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.304 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.304 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.304 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.304 13:15:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.304 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.304 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103060356 kB' 'MemAvailable: 106791680 kB' 'Buffers: 2704 kB' 'Cached: 16247276 kB' 'SwapCached: 0 kB' 'Active: 13120588 kB' 'Inactive: 3693560 kB' 'Active(anon): 12640788 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567136 kB' 'Mapped: 176648 kB' 'Shmem: 12076620 kB' 'KReclaimable: 598908 kB' 'Slab: 1488476 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889568 kB' 'KernelStack: 27360 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14216640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235880 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.304 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.304 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 
13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.305 13:15:10 -- 
setup/common.sh@33 -- # echo 0 00:04:13.305 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.305 13:15:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.305 13:15:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.305 13:15:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.305 13:15:10 -- setup/common.sh@18 -- # local node= 00:04:13.305 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.305 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.305 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.305 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.305 13:15:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.305 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.305 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103055708 kB' 'MemAvailable: 106787032 kB' 'Buffers: 2704 kB' 'Cached: 16247280 kB' 'SwapCached: 0 kB' 'Active: 13122956 kB' 'Inactive: 3693560 kB' 'Active(anon): 12643156 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569540 kB' 'Mapped: 176924 kB' 'Shmem: 12076624 kB' 'KReclaimable: 598908 kB' 'Slab: 1488460 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889552 kB' 'KernelStack: 27328 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14218120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
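The anon=0 recorded above is the AnonHugePages lookup that verify_nr_hugepages only performs because the transparent-hugepage mode string compared earlier ("always [madvise] never") does not have [never] selected; with THP available, THP-backed anonymous memory has to be accounted for separately from the explicit pool. A sketch of that pre-check, assuming the standard sysfs location for the THP mode file:

  # Only sample AnonHugePages when transparent huge pages can actually be handed out.
  thp_mode_file=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp_mode_file && $(<"$thp_mode_file") != *'[never]'* ]]; then
      anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)   # reported in kB
  fi
  echo "anon_hugepages=${anon}"   # 0 in the trace above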
00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.305 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.305 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 
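As a quick sanity check, the hugepage figures in the snapshot printed above are internally consistent: 1024 pages of 2048 kB account for exactly the 2097152 kB reported as Hugetlb. In bash arithmetic:

# Values taken from the MemTotal/HugePages snapshot printed above.
hugepages_total=1024
hugepagesize_kb=2048
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'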
00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.306 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.306 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.307 13:15:10 -- setup/common.sh@33 -- # echo 0 00:04:13.307 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.307 13:15:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.307 13:15:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.307 13:15:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.307 13:15:10 -- setup/common.sh@18 -- # local node= 00:04:13.307 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.307 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.307 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.307 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.307 13:15:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.307 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.307 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 
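The HugePages_Rsvd lookup that starts above is the last of the three inputs (anon, surplus, reserved) that the surrounding hugepages.sh logic combines a little further down in this trace, where it checks that the requested page count is fully accounted for before splitting it across NUMA nodes. A sketch of that arithmetic, using the values echoed in this log:

# Values as they are echoed in this trace.
nr_hugepages=1024
anon=0    # AnonHugePages reported as 0 kB
surp=0    # HugePages_Surp
resv=0    # HugePages_Rsvd

# The check the trace performs before the per-node pass: the requested count
# must equal nr_hugepages plus surplus plus reserved pages.
if (( 1024 == nr_hugepages + surp + resv )); then
    echo "requested hugepages fully accounted for"
fi
# The follow-up HugePages_Total lookup (which echoes 1024 below) then confirms
# that the kernel-wide pool really holds that many pages.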
00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103057472 kB' 'MemAvailable: 106788796 kB' 'Buffers: 2704 kB' 'Cached: 16247296 kB' 'SwapCached: 0 kB' 'Active: 13121952 kB' 'Inactive: 3693560 kB' 'Active(anon): 12642152 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568872 kB' 'Mapped: 176900 kB' 'Shmem: 12076640 kB' 'KReclaimable: 598908 kB' 'Slab: 1488452 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889544 kB' 'KernelStack: 27296 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14218136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.307 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.307 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 
13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.308 13:15:10 -- setup/common.sh@33 -- # echo 0 00:04:13.308 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.308 13:15:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.308 13:15:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.308 nr_hugepages=1024 00:04:13.308 13:15:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.308 resv_hugepages=0 00:04:13.308 13:15:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.308 surplus_hugepages=0 00:04:13.308 13:15:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.308 anon_hugepages=0 00:04:13.308 13:15:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.308 13:15:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.308 13:15:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.308 13:15:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.308 13:15:10 -- setup/common.sh@18 -- # local node= 00:04:13.308 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.308 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.308 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.308 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.308 13:15:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.308 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.308 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103057724 kB' 'MemAvailable: 106789048 kB' 'Buffers: 2704 kB' 'Cached: 16247320 kB' 'SwapCached: 0 kB' 'Active: 13117248 kB' 'Inactive: 3693560 kB' 'Active(anon): 12637448 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564316 kB' 'Mapped: 
176396 kB' 'Shmem: 12076664 kB' 'KReclaimable: 598908 kB' 'Slab: 1488452 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889544 kB' 'KernelStack: 27376 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14211800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235816 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.308 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.308 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
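Once the system-wide HugePages_Total lookup in progress above returns 1024, the rest of this trace repeats the same scan against the per-node meminfo files, expecting 512 pages on each of the two NUMA nodes. A hedged sketch of that per-node pass (not the literal hugepages.sh loop):

#!/usr/bin/env bash
# Hypothetical per-node pass; the two-node split of 512 pages each mirrors this trace.
nr_hugepages=1024
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( nr_hugepages / ${#nodes[@]} ))        # 1024 / 2 = 512 on this machine

for node_dir in "${nodes[@]}"; do
    node=${node_dir##*node}
    # Per-node meminfo lines are prefixed with "Node <n> "; strip it, then pick the key.
    total=$(sed 's/^Node [0-9]* //' "${node_dir}/meminfo" | awk '$1 == "HugePages_Total:" {print $2}')
    surp=$(sed 's/^Node [0-9]* //' "${node_dir}/meminfo"  | awk '$1 == "HugePages_Surp:"  {print $2}')
    echo "node${node}: expected ${per_node}, reported ${total} total, ${surp} surplus"
done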
00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.309 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.309 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.573 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.573 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.573 13:15:10 -- setup/common.sh@33 -- # echo 1024 00:04:13.573 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.573 13:15:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.574 13:15:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.574 13:15:10 -- setup/hugepages.sh@27 -- # local node 00:04:13.574 13:15:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.574 13:15:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.574 13:15:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.574 13:15:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.574 13:15:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.574 13:15:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.574 13:15:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.574 13:15:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.574 13:15:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.574 13:15:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.574 13:15:10 -- setup/common.sh@18 -- # local node=0 00:04:13.574 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.574 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.574 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.574 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.574 13:15:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.574 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.574 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59421444 kB' 'MemUsed: 6237564 kB' 'SwapCached: 0 kB' 'Active: 2821864 kB' 'Inactive: 235936 kB' 'Active(anon): 2582440 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801328 kB' 'Mapped: 87976 kB' 'AnonPages: 259700 kB' 'Shmem: 2325968 kB' 'KernelStack: 15176 kB' 'PageTables: 5088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273464 kB' 'Slab: 775324 kB' 'SReclaimable: 273464 kB' 'SUnreclaim: 501860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 
13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- 
setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.574 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.574 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@33 -- # echo 0 00:04:13.575 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.575 13:15:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.575 13:15:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.575 13:15:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.575 13:15:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.575 13:15:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.575 13:15:10 -- setup/common.sh@18 -- # local node=1 00:04:13.575 13:15:10 -- setup/common.sh@19 -- # local var val 00:04:13.575 13:15:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.575 13:15:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.575 13:15:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.575 13:15:10 -- setup/common.sh@24 -- 
# mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.575 13:15:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.575 13:15:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 43633560 kB' 'MemUsed: 17046276 kB' 'SwapCached: 0 kB' 'Active: 10299908 kB' 'Inactive: 3457624 kB' 'Active(anon): 10059532 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13448708 kB' 'Mapped: 88524 kB' 'AnonPages: 308988 kB' 'Shmem: 9750708 kB' 'KernelStack: 12152 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325444 kB' 'Slab: 713112 kB' 'SReclaimable: 325444 kB' 'SUnreclaim: 387668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 
-- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.575 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.575 13:15:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # continue 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.576 13:15:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.576 13:15:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.576 13:15:10 -- setup/common.sh@33 -- # echo 0 00:04:13.576 13:15:10 -- setup/common.sh@33 -- # return 0 00:04:13.576 13:15:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.576 13:15:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.576 13:15:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.576 13:15:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.576 node0=512 expecting 512 00:04:13.576 13:15:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.576 13:15:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.576 13:15:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.576 13:15:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:13.576 node1=512 expecting 512 00:04:13.576 13:15:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.576 00:04:13.576 real 0m3.929s 00:04:13.576 user 0m1.561s 00:04:13.576 sys 0m2.430s 00:04:13.576 13:15:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.576 13:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:13.576 ************************************ 00:04:13.576 END TEST even_2G_alloc 00:04:13.576 ************************************ 00:04:13.576 13:15:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:13.576 13:15:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.576 13:15:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.576 13:15:10 -- common/autotest_common.sh@10 -- # set +x 00:04:13.576 ************************************ 00:04:13.576 START TEST odd_alloc 00:04:13.576 ************************************ 00:04:13.576 13:15:10 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:13.576 13:15:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:13.576 13:15:10 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:13.576 13:15:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:13.576 13:15:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.576 13:15:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.576 13:15:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.576 13:15:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:13.576 13:15:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.576 13:15:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.576 13:15:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.576 13:15:10 
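The odd_alloc numbers in the trace above are consistent with a simple ceiling division: the requested size of 2098176 kB (2049 MB) divided by the 2048 kB default hugepage size is 1024.5, which appears as nr_hugepages=1025. The lines below are a minimal sketch of that arithmetic only, with hypothetical variable names; they are not the setup/hugepages.sh implementation, which may compute the count differently.

# Ceiling division reproducing 2098176 kB -> 1025 hugepages of 2048 kB each.
size_kb=2098176
hugepage_kb=2048
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "$nr_hugepages"   # prints 1025
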
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.576 13:15:10 -- setup/hugepages.sh@83 -- # : 513 00:04:13.576 13:15:10 -- setup/hugepages.sh@84 -- # : 1 00:04:13.576 13:15:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:13.576 13:15:10 -- setup/hugepages.sh@83 -- # : 0 00:04:13.576 13:15:10 -- setup/hugepages.sh@84 -- # : 0 00:04:13.576 13:15:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.576 13:15:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.576 13:15:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.576 13:15:10 -- setup/hugepages.sh@160 -- # setup output 00:04:13.576 13:15:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.576 13:15:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.885 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:16.885 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:16.885 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.148 13:15:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:17.149 13:15:14 -- setup/hugepages.sh@89 -- # local node 00:04:17.149 13:15:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.149 13:15:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.149 13:15:14 -- setup/hugepages.sh@92 -- # local surp 00:04:17.149 13:15:14 -- setup/hugepages.sh@93 -- # local resv 00:04:17.149 13:15:14 -- setup/hugepages.sh@94 -- # local anon 00:04:17.149 13:15:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.149 13:15:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.149 13:15:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.149 13:15:14 -- setup/common.sh@18 -- # local node= 00:04:17.149 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.149 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.149 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.149 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.149 13:15:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.149 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.149 
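The per-node split recorded just above assigns 512 of the 1025 pages to node 1 and the remaining 513 to node 0 before setup.sh is re-run. The sketch below reproduces that distribution under the assumption that the counts are ultimately applied through the kernel's standard per-node sysfs files; it is an illustration, not a copy of the SPDK setup scripts.

#!/usr/bin/env bash
# Sketch: split an odd hugepage total across two NUMA nodes as the trace shows
# (the even half to node 1, the remainder to node 0), then write the counts to
# the generic 2048 kB per-node sysfs knobs. Requires root; the paths are the
# standard kernel ones, nothing SPDK-specific.
total=1025
nodes_test=()
nodes_test[1]=$(( total / 2 ))            # 512
nodes_test[0]=$(( total - total / 2 ))    # 513
for node in "${!nodes_test[@]}"; do
    echo "${nodes_test[$node]}" | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
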
13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103085860 kB' 'MemAvailable: 106817184 kB' 'Buffers: 2704 kB' 'Cached: 16247432 kB' 'SwapCached: 0 kB' 'Active: 13121532 kB' 'Inactive: 3693560 kB' 'Active(anon): 12641732 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568308 kB' 'Mapped: 176596 kB' 'Shmem: 12076776 kB' 'KReclaimable: 598908 kB' 'Slab: 1488472 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889564 kB' 'KernelStack: 27216 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 14215720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.149 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.150 13:15:14 -- setup/common.sh@33 -- # echo 0 00:04:17.150 13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.150 13:15:14 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.150 13:15:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.150 13:15:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.150 13:15:14 -- setup/common.sh@18 -- # local node= 00:04:17.150 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.150 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.150 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.150 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.150 13:15:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.150 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.150 13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103087600 kB' 'MemAvailable: 106818924 kB' 'Buffers: 2704 kB' 'Cached: 16247436 kB' 'SwapCached: 0 kB' 'Active: 13115112 kB' 'Inactive: 3693560 kB' 'Active(anon): 12635312 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 561948 kB' 'Mapped: 175940 kB' 'Shmem: 12076780 kB' 'KReclaimable: 598908 kB' 'Slab: 1488520 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889612 kB' 'KernelStack: 27200 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 14209612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 
13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 13:15:14 -- setup/common.sh@33 -- # echo 0 00:04:17.151 13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.151 13:15:14 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.151 13:15:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.151 13:15:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.151 13:15:14 -- setup/common.sh@18 -- # local node= 00:04:17.151 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.151 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.151 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.151 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.151 13:15:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.151 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.151 13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103086844 kB' 'MemAvailable: 106818168 kB' 'Buffers: 2704 kB' 'Cached: 16247448 kB' 'SwapCached: 0 kB' 'Active: 13115024 kB' 'Inactive: 3693560 kB' 'Active(anon): 12635224 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561820 kB' 'Mapped: 175940 kB' 'Shmem: 12076792 kB' 'KReclaimable: 598908 kB' 'Slab: 1488520 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889612 kB' 'KernelStack: 27184 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 14209628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:17.151 13:15:14 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.151 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.151 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- 
setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 
13:15:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.152 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.153 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.153 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.153 13:15:14 -- setup/common.sh@33 -- # echo 0 00:04:17.153 
13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.153 13:15:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.153 13:15:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:17.153 nr_hugepages=1025 00:04:17.153 13:15:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.153 resv_hugepages=0 00:04:17.153 13:15:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.153 surplus_hugepages=0 00:04:17.153 13:15:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.153 anon_hugepages=0 00:04:17.418 13:15:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.418 13:15:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:17.418 13:15:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.418 13:15:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.418 13:15:14 -- setup/common.sh@18 -- # local node= 00:04:17.418 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.418 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.418 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.418 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.418 13:15:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.418 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.418 13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103087840 kB' 'MemAvailable: 106819164 kB' 'Buffers: 2704 kB' 'Cached: 16247472 kB' 'SwapCached: 0 kB' 'Active: 13114796 kB' 'Inactive: 3693560 kB' 'Active(anon): 12634996 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561576 kB' 'Mapped: 175940 kB' 'Shmem: 12076816 kB' 'KReclaimable: 598908 kB' 'Slab: 1488520 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889612 kB' 'KernelStack: 27184 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 14209644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 
-- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.418 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.418 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 
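(The trace above, and the similar blocks around it, is setup/common.sh's get_meminfo walking every key of the chosen meminfo file until it reaches the field it was asked for, then echoing that field's value. The following is a minimal sketch of that lookup, inferred from the xtrace rather than copied from the script; the name get_meminfo_sketch and the exact structure are illustrative. It reads the system-wide /proc/meminfo, or the per-node /sys/devices/system/node/node<N>/meminfo when a node index is supplied, strips the "Node <N>" prefix that the per-node files carry, and prints the value for the requested key.)

#!/usr/bin/env bash
# Simplified sketch of the meminfo lookup exercised in the trace above.
# Inferred from the xtrace of setup/common.sh; not the project's verbatim code.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    # With a node index, read the per-node view from sysfs instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix on sysfs lines
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"          # e.g. var=HugePages_Total val=1025
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Examples: get_meminfo_sketch HugePages_Total     -> 1025 on this run
#           get_meminfo_sketch HugePages_Surp 0    -> surplus pages on node 0

(End of sketch; the trace resumes below.)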
00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 
13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.419 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.419 13:15:14 -- setup/common.sh@33 -- # echo 1025 00:04:17.419 13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.419 13:15:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.419 13:15:14 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.419 13:15:14 -- setup/hugepages.sh@27 -- # local node 00:04:17.419 13:15:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.419 13:15:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.419 13:15:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.419 13:15:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:17.419 13:15:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.419 13:15:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.419 13:15:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.419 13:15:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.419 13:15:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.419 13:15:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.419 13:15:14 
-- setup/common.sh@18 -- # local node=0 00:04:17.419 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.419 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.419 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.419 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.419 13:15:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.419 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.419 13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.419 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59440128 kB' 'MemUsed: 6218880 kB' 'SwapCached: 0 kB' 'Active: 2821260 kB' 'Inactive: 235936 kB' 'Active(anon): 2581836 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801476 kB' 'Mapped: 87976 kB' 'AnonPages: 258940 kB' 'Shmem: 2326116 kB' 'KernelStack: 15080 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273464 kB' 'Slab: 775264 kB' 'SReclaimable: 273464 kB' 'SUnreclaim: 501800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.420 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.420 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.420 13:15:14 -- setup/common.sh@33 -- # echo 0 00:04:17.420 13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.420 13:15:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.420 13:15:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.420 13:15:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.420 13:15:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.420 13:15:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.420 13:15:14 -- setup/common.sh@18 -- # local node=1 00:04:17.421 13:15:14 -- setup/common.sh@19 -- # local var val 00:04:17.421 13:15:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.421 13:15:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.421 13:15:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.421 13:15:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.421 13:15:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.421 13:15:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 43647716 kB' 'MemUsed: 17032120 kB' 'SwapCached: 0 kB' 'Active: 10293560 kB' 'Inactive: 3457624 kB' 'Active(anon): 10053184 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13448716 kB' 'Mapped: 87964 kB' 'AnonPages: 302636 kB' 'Shmem: 9750716 kB' 'KernelStack: 12104 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325444 kB' 'Slab: 713256 kB' 'SReclaimable: 325444 kB' 'SUnreclaim: 387812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- 
setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.421 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.421 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.422 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.422 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.422 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.422 13:15:14 -- setup/common.sh@32 -- # continue 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.422 13:15:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.422 13:15:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.422 13:15:14 -- setup/common.sh@33 -- # echo 0 00:04:17.422 13:15:14 -- setup/common.sh@33 -- # return 0 00:04:17.422 13:15:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.422 13:15:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.422 13:15:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:17.422 node0=512 expecting 513 00:04:17.422 13:15:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
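(At this point the odd_alloc check has read HugePages_Surp for both nodes and is recording the per-node counts it found, 512 on node0 and 513 on node1, against the split it requested, 513/512, as the "node0=512 expecting 513" / "node1=513 expecting 512" echoes show. Because the odd 1025th page can land on either node, here node1 rather than node0, the test only requires the set of per-node counts to match; that is what the sorted_t/sorted_s bookkeeping and the final [[ 512 513 == 512 513 ]] comparison just below express. A simplified restatement of that check follows, using sort instead of the script's associative arrays; the variable names are illustrative only.)

# Order-insensitive comparison of per-node hugepage counts (sketch).
expected=(513 512)   # split the test asked for, node0 then node1 (per the echoes above)
actual=(512 513)     # split reported under /sys/devices/system/node/node*/
exp_sorted=$(printf '%s\n' "${expected[@]}" | sort -n | tr '\n' ' ')
act_sorted=$(printf '%s\n' "${actual[@]}"   | sort -n | tr '\n' ' ')
# Only the multiset of counts matters, not which node holds the extra page.
[[ $exp_sorted == "$act_sorted" ]] && echo "odd_alloc: per-node split OK (512 513)"

(End of sketch; the trace resumes below.)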
00:04:17.422 13:15:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.422 13:15:14 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:17.422 node1=513 expecting 512 00:04:17.422 13:15:14 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:17.422 00:04:17.422 real 0m3.843s 00:04:17.422 user 0m1.543s 00:04:17.422 sys 0m2.357s 00:04:17.422 13:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.422 13:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:17.422 ************************************ 00:04:17.422 END TEST odd_alloc 00:04:17.422 ************************************ 00:04:17.422 13:15:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:17.422 13:15:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.422 13:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.422 13:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:17.422 ************************************ 00:04:17.422 START TEST custom_alloc 00:04:17.422 ************************************ 00:04:17.422 13:15:14 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:17.422 13:15:14 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:17.422 13:15:14 -- setup/hugepages.sh@169 -- # local node 00:04:17.422 13:15:14 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:17.422 13:15:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:17.422 13:15:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:17.422 13:15:14 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:17.422 13:15:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:17.422 13:15:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.422 13:15:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.422 13:15:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.422 13:15:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.422 13:15:14 -- setup/hugepages.sh@83 -- # : 256 00:04:17.422 13:15:14 -- setup/hugepages.sh@84 -- # : 1 00:04:17.422 13:15:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.422 13:15:14 -- setup/hugepages.sh@83 -- # : 0 00:04:17.422 13:15:14 -- setup/hugepages.sh@84 -- # : 0 00:04:17.422 13:15:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:17.422 13:15:14 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:17.422 13:15:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.422 13:15:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.422 13:15:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.422 13:15:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.422 13:15:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.422 13:15:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.422 13:15:14 -- setup/hugepages.sh@78 -- # return 0 00:04:17.422 13:15:14 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:17.422 13:15:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.422 13:15:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.422 13:15:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.422 13:15:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.422 13:15:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.422 13:15:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.422 13:15:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:17.422 13:15:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.422 13:15:14 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.422 13:15:14 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:17.422 13:15:14 -- setup/hugepages.sh@78 -- # return 0 00:04:17.422 13:15:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:17.422 13:15:14 -- setup/hugepages.sh@187 -- # setup output 00:04:17.422 13:15:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.422 13:15:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.729 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.729 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:20.730 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:20.730 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:20.994 13:15:18 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:20.994 13:15:18 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:20.994 13:15:18 -- setup/hugepages.sh@89 -- # local node 00:04:20.994 13:15:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.994 13:15:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.994 13:15:18 -- setup/hugepages.sh@92 -- # local surp 00:04:20.994 13:15:18 -- setup/hugepages.sh@93 -- # local resv 00:04:20.994 13:15:18 -- setup/hugepages.sh@94 -- # local anon 00:04:20.994 13:15:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.994 13:15:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.994 13:15:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.994 13:15:18 -- setup/common.sh@18 -- # local node= 00:04:20.994 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:20.994 13:15:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.994 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.994 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.994 13:15:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.994 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.994 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102060488 kB' 'MemAvailable: 105791812 kB' 'Buffers: 2704 kB' 'Cached: 16247584 kB' 'SwapCached: 0 kB' 'Active: 13117812 kB' 'Inactive: 3693560 kB' 'Active(anon): 12638012 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564500 kB' 'Mapped: 176012 kB' 'Shmem: 12076928 kB' 'KReclaimable: 598908 kB' 'Slab: 1489088 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890180 kB' 'KernelStack: 27184 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 14213708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 
13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.994 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.994 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.995 13:15:18 -- setup/common.sh@33 -- # echo 0 00:04:20.995 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:20.995 13:15:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.995 13:15:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.995 13:15:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.995 13:15:18 -- setup/common.sh@18 -- # local node= 00:04:20.995 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:20.995 13:15:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.995 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.995 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.995 13:15:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.995 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.995 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102064424 kB' 'MemAvailable: 105795748 kB' 'Buffers: 2704 kB' 'Cached: 16247588 kB' 'SwapCached: 0 kB' 'Active: 13117364 kB' 'Inactive: 3693560 kB' 'Active(anon): 12637564 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564012 kB' 'Mapped: 176028 kB' 'Shmem: 12076932 kB' 'KReclaimable: 598908 kB' 'Slab: 1489080 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890172 kB' 'KernelStack: 27184 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 14215368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # 
continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.995 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.995 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- 
# continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.996 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.996 13:15:18 -- setup/common.sh@33 -- # echo 0 00:04:20.996 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:20.996 13:15:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:20.996 13:15:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.996 13:15:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.996 13:15:18 -- setup/common.sh@18 -- # local node= 00:04:20.996 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:20.996 13:15:18 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:20.996 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.996 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.996 13:15:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.996 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.996 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.996 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102071676 kB' 'MemAvailable: 105803000 kB' 'Buffers: 2704 kB' 'Cached: 16247600 kB' 'SwapCached: 0 kB' 'Active: 13116468 kB' 'Inactive: 3693560 kB' 'Active(anon): 12636668 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563128 kB' 'Mapped: 176012 kB' 'Shmem: 12076944 kB' 'KReclaimable: 598908 kB' 'Slab: 1489080 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890172 kB' 'KernelStack: 27392 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 14213736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # continue 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.997 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.997 13:15:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.265 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.265 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 
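The repeated "[[ field == HugePages_Rsvd ]] / continue" records above all come from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file when a node argument is given) into an array and walks it field by field until the requested key matches, echoing the value. The following is a minimal sketch of that lookup pattern reconstructed from the trace, not a verbatim copy of the SPDK script; the function name is illustrative.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern recorded in the trace above.
# Reconstruction for illustration only, not the SPDK setup/common.sh code.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument the per-node file is read instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that first.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"    # value only, unit dropped
            return 0
        fi
    done
    echo 0                      # field not present
}

Called as get_meminfo_sketch HugePages_Rsvd it prints 0 on this host, which is the value the trace assigns to resv a few records further down; passing a node number (for example get_meminfo_sketch HugePages_Total 0) reads that node's own meminfo file instead.
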
00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.266 13:15:18 -- setup/common.sh@33 -- # echo 0 00:04:21.266 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:21.266 13:15:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.266 13:15:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:21.266 nr_hugepages=1536 00:04:21.266 13:15:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.266 resv_hugepages=0 00:04:21.266 13:15:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.266 surplus_hugepages=0 00:04:21.266 13:15:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.266 anon_hugepages=0 00:04:21.266 13:15:18 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.266 13:15:18 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:21.266 13:15:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.266 13:15:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.266 13:15:18 -- setup/common.sh@18 -- # local node= 00:04:21.266 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:21.266 13:15:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.266 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.266 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.266 13:15:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.266 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.266 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.266 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.266 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338844 kB' 'MemFree: 102071120 kB' 'MemAvailable: 105802444 kB' 'Buffers: 2704 kB' 'Cached: 16247620 kB' 'SwapCached: 0 kB' 'Active: 13116800 kB' 'Inactive: 3693560 kB' 'Active(anon): 12637000 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563420 kB' 'Mapped: 175952 kB' 'Shmem: 12076964 kB' 'KReclaimable: 598908 kB' 'Slab: 1489272 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890364 kB' 'KernelStack: 27344 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 14215400 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236056 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.266 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.267 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.267 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.268 13:15:18 -- setup/common.sh@33 -- # echo 1536 00:04:21.268 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:21.268 13:15:18 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.268 13:15:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.268 13:15:18 -- setup/hugepages.sh@27 -- # local node 00:04:21.268 13:15:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.268 13:15:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.268 13:15:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.268 13:15:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.268 13:15:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.268 13:15:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.268 13:15:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.268 13:15:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.268 13:15:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.268 13:15:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.268 13:15:18 -- setup/common.sh@18 -- # local node=0 00:04:21.268 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:21.268 13:15:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.268 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.268 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.268 13:15:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.268 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.268 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59447068 kB' 'MemUsed: 6211940 kB' 'SwapCached: 0 kB' 'Active: 2822496 kB' 'Inactive: 235936 kB' 'Active(anon): 2583072 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801556 kB' 'Mapped: 87976 kB' 'AnonPages: 260076 kB' 'Shmem: 2326196 kB' 'KernelStack: 15112 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273464 kB' 'Slab: 775444 kB' 'SReclaimable: 273464 kB' 'SUnreclaim: 501980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.268 13:15:18 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.268 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.268 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- 
setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.269 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.269 13:15:18 -- setup/common.sh@33 -- # echo 0 00:04:21.269 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:21.269 13:15:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.269 13:15:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.269 13:15:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.269 13:15:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
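Each get_meminfo call traced here picks one field out of /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo file: the file is slurped, the "Node N " prefix is stripped, and every line is split on ': ' until the requested key matches, at which point the value is echoed (0 for HugePages_Surp on node 0 above). A self-contained sketch of that lookup, with a hypothetical helper name since only the traced internals are visible here:

    get_node_meminfo() {
        # Usage: get_node_meminfo <key> [node]  ->  echoes the numeric value for <key>
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node counters live under sysfs and are prefixed with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}               # drop the per-node prefix if present
            IFS=': ' read -r var val _ <<<"$line"    # split "Key:   value kB" into fields
            if [[ $var == "$get" ]]; then
                echo "$val"                          # value only, without the trailing kB
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    # get_node_meminfo HugePages_Surp 0   ->  prints 0 on this run
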
00:04:21.269 13:15:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.269 13:15:18 -- setup/common.sh@18 -- # local node=1 00:04:21.269 13:15:18 -- setup/common.sh@19 -- # local var val 00:04:21.269 13:15:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.269 13:15:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.269 13:15:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.269 13:15:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.269 13:15:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.269 13:15:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.269 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 42622476 kB' 'MemUsed: 18057360 kB' 'SwapCached: 0 kB' 'Active: 10294576 kB' 'Inactive: 3457624 kB' 'Active(anon): 10054200 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13448776 kB' 'Mapped: 87976 kB' 'AnonPages: 303552 kB' 'Shmem: 9750776 kB' 'KernelStack: 12232 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 325444 kB' 'Slab: 713828 kB' 'SReclaimable: 325444 kB' 'SUnreclaim: 388384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 
13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.270 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.270 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # continue 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.271 13:15:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.271 13:15:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.271 13:15:18 -- setup/common.sh@33 -- # echo 0 00:04:21.271 13:15:18 -- setup/common.sh@33 -- # return 0 00:04:21.271 13:15:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.271 13:15:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.271 13:15:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.271 13:15:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.271 13:15:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.271 node0=512 expecting 512 00:04:21.271 13:15:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.271 13:15:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.271 13:15:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.271 13:15:18 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:21.271 node1=1024 expecting 1024 00:04:21.271 13:15:18 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:21.271 00:04:21.271 real 0m3.827s 00:04:21.271 user 0m1.553s 00:04:21.271 sys 0m2.339s 00:04:21.271 13:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.271 13:15:18 -- common/autotest_common.sh@10 -- # set +x 00:04:21.271 ************************************ 00:04:21.271 END TEST custom_alloc 00:04:21.271 ************************************ 00:04:21.271 13:15:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.271 13:15:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.271 13:15:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.271 13:15:18 -- common/autotest_common.sh@10 -- # set +x 00:04:21.271 ************************************ 00:04:21.271 START TEST no_shrink_alloc 00:04:21.271 ************************************ 00:04:21.271 13:15:18 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:21.271 13:15:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.271 13:15:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.271 13:15:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.271 13:15:18 -- setup/hugepages.sh@51 -- # shift 00:04:21.271 13:15:18 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.271 13:15:18 -- 
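The no_shrink_alloc test starting here calls get_test_nr_hugepages with 2097152 (kB) pinned to node 0, and just below the trace settles on nr_hugepages=1024: the requested size divided by the 2048 kB hugepage size this machine reports, 2097152 / 2048 = 1024. A hedged sketch of that conversion plus one standard sysfs path for applying such a request to a single node (the SPDK setup.sh invoked below may well drive the allocation differently):

    # Convert a size in kB into a hugepage count and request it on one NUMA node.
    # Needs root; the sysfs path is the stock kernel per-node hugepage interface.
    request_node_hugepages() {
        local size_kb=$1 node=$2 hp_kb nr
        hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
        nr=$(( size_kb / hp_kb ))                                  # 2097152 / 2048 = 1024
        echo "$nr" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
    }

    # request_node_hugepages 2097152 0   ->  1024 x 2 MiB pages on node 0
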
setup/hugepages.sh@52 -- # local node_ids 00:04:21.271 13:15:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.271 13:15:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.271 13:15:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.271 13:15:18 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.271 13:15:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.271 13:15:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.271 13:15:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.271 13:15:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.271 13:15:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.271 13:15:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.271 13:15:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.271 13:15:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.271 13:15:18 -- setup/hugepages.sh@73 -- # return 0 00:04:21.271 13:15:18 -- setup/hugepages.sh@198 -- # setup output 00:04:21.271 13:15:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.271 13:15:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.633 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:24.633 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:24.633 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:24.898 13:15:22 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:24.898 13:15:22 -- setup/hugepages.sh@89 -- # local node 00:04:24.898 13:15:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.898 13:15:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.898 13:15:22 -- setup/hugepages.sh@92 -- # local surp 00:04:24.898 13:15:22 -- setup/hugepages.sh@93 -- # local resv 00:04:24.898 13:15:22 -- setup/hugepages.sh@94 -- # local anon 00:04:24.898 13:15:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.898 13:15:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.898 13:15:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.898 13:15:22 -- setup/common.sh@18 -- # local node= 00:04:24.898 13:15:22 -- setup/common.sh@19 -- # local var val 00:04:24.898 13:15:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.898 13:15:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.898 13:15:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
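The verify_nr_hugepages pass that begins above reads AnonHugePages, HugePages_Surp and HugePages_Rsvd out of /proc/meminfo (the lookups traced below), and, as in the earlier 1536-page check, HugePages_Total is compared against the requested count plus surplus and reserved pages. That bookkeeping reduces to roughly the following, written as an illustrative standalone check rather than the script's own function:

    # Sanity-check the global hugepage pool the way the traced test does:
    # HugePages_Total should equal what was requested plus surplus and reserved pages.
    verify_hugepage_total() {
        local expected=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
        resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
        if (( total == expected + surp + resv )); then
            echo "OK: HugePages_Total=$total matches expected=$expected (surp=$surp rsvd=$resv)"
        else
            echo "MISMATCH: total=$total expected=$expected surp=$surp rsvd=$resv" >&2
            return 1
        fi
    }

    # verify_hugepage_total 1024
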
00:04:24.898 13:15:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.898 13:15:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.898 13:15:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103147512 kB' 'MemAvailable: 106878836 kB' 'Buffers: 2704 kB' 'Cached: 16247732 kB' 'SwapCached: 0 kB' 'Active: 13119328 kB' 'Inactive: 3693560 kB' 'Active(anon): 12639528 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565732 kB' 'Mapped: 176036 kB' 'Shmem: 12077076 kB' 'KReclaimable: 598908 kB' 'Slab: 1488864 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889956 kB' 'KernelStack: 27424 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14216280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236152 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # 
continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.898 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.898 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.899 13:15:22 -- setup/common.sh@33 -- # echo 0 00:04:24.899 13:15:22 -- setup/common.sh@33 -- # return 0 00:04:24.899 13:15:22 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.899 13:15:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.899 13:15:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.899 13:15:22 -- setup/common.sh@18 -- # local node= 00:04:24.899 13:15:22 -- setup/common.sh@19 -- # local var val 00:04:24.899 13:15:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.899 13:15:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.899 13:15:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.899 13:15:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.899 13:15:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.899 13:15:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103149696 kB' 'MemAvailable: 106881020 kB' 'Buffers: 2704 kB' 'Cached: 16247736 kB' 'SwapCached: 0 kB' 'Active: 13118176 kB' 'Inactive: 3693560 kB' 'Active(anon): 
12638376 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564652 kB' 'Mapped: 176020 kB' 'Shmem: 12077080 kB' 'KReclaimable: 598908 kB' 'Slab: 1488848 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889940 kB' 'KernelStack: 27264 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14214644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236088 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.899 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.899 13:15:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # 
continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.900 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.900 13:15:22 -- setup/common.sh@33 -- # echo 0 00:04:24.900 13:15:22 -- setup/common.sh@33 -- # return 0 00:04:24.900 13:15:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.900 13:15:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.900 13:15:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.900 13:15:22 -- setup/common.sh@18 -- # local node= 00:04:24.900 13:15:22 -- setup/common.sh@19 -- # local var val 00:04:24.900 13:15:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.900 13:15:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.900 13:15:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.900 13:15:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.900 13:15:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.900 13:15:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.900 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103147596 kB' 'MemAvailable: 106878920 kB' 'Buffers: 2704 kB' 'Cached: 16247748 kB' 'SwapCached: 0 kB' 'Active: 13118620 kB' 'Inactive: 3693560 kB' 'Active(anon): 12638820 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565056 kB' 'Mapped: 175984 kB' 'Shmem: 12077092 kB' 'KReclaimable: 598908 kB' 'Slab: 1488904 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889996 kB' 'KernelStack: 27392 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14216308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236056 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.901 13:15:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # 
continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.902 13:15:22 -- setup/common.sh@33 -- # echo 0 00:04:24.902 13:15:22 -- setup/common.sh@33 -- # return 0 00:04:24.902 13:15:22 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.902 13:15:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.902 nr_hugepages=1024 00:04:24.902 13:15:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.902 resv_hugepages=0 00:04:24.902 13:15:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.902 surplus_hugepages=0 00:04:24.902 13:15:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.902 anon_hugepages=0 00:04:24.902 13:15:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.902 13:15:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.902 13:15:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.902 13:15:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.902 13:15:22 -- setup/common.sh@18 -- # local node= 00:04:24.902 13:15:22 -- setup/common.sh@19 -- # local var val 00:04:24.902 13:15:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.902 13:15:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.902 13:15:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.902 13:15:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.902 13:15:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.902 13:15:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103159620 kB' 'MemAvailable: 106890944 kB' 'Buffers: 2704 kB' 'Cached: 16247764 kB' 'SwapCached: 0 kB' 'Active: 13118192 kB' 'Inactive: 3693560 kB' 'Active(anon): 12638392 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564612 kB' 'Mapped: 175984 kB' 'Shmem: 12077108 kB' 'KReclaimable: 598908 kB' 'Slab: 1488900 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 889992 kB' 'KernelStack: 27264 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14216324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236120 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.902 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 
13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.903 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 
13:15:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.165 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.165 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # 
continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.166 13:15:22 -- setup/common.sh@33 -- # echo 1024 00:04:25.166 13:15:22 -- setup/common.sh@33 -- # return 0 00:04:25.166 13:15:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.166 13:15:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.166 13:15:22 -- setup/hugepages.sh@27 -- # local node 00:04:25.166 13:15:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.166 13:15:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.166 13:15:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.166 13:15:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.166 13:15:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.166 13:15:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.166 13:15:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.166 13:15:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.166 13:15:22 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.166 13:15:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.166 13:15:22 -- setup/common.sh@18 -- # local node=0 00:04:25.166 13:15:22 -- setup/common.sh@19 -- # local var val 00:04:25.166 13:15:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.166 13:15:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.166 13:15:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.166 13:15:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.166 13:15:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.166 13:15:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58413700 kB' 'MemUsed: 7245308 kB' 'SwapCached: 0 kB' 'Active: 2822392 kB' 'Inactive: 235936 kB' 'Active(anon): 2582968 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801636 kB' 'Mapped: 87976 kB' 'AnonPages: 259876 kB' 'Shmem: 2326276 kB' 'KernelStack: 15016 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273464 kB' 'Slab: 775292 kB' 'SReclaimable: 273464 kB' 'SUnreclaim: 501828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 
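Note: the records around this point are setup/common.sh's get_meminfo scanning /sys/devices/system/node/node0/meminfo field by field under set -x, looking for HugePages_Surp on node 0. A minimal sketch of that lookup, assuming the same file layout the trace shows (the helper name and the fallback of echoing 0 are illustrative, not the script's exact code):

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          # Per-node counters live under sysfs, as the trace's -e test shows.
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          # Per-node files prefix every line with "Node <N> "; strip that so
          # the field names match the global /proc/meminfo format.
          [[ -n $node ]] && line=${line#"Node $node "}
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      echo 0
  }
  # Example: get_meminfo_sketch HugePages_Surp 0   -> prints 0 here (surp=0)
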
00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.166 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.166 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 
13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # continue 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.167 13:15:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.167 13:15:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.167 13:15:22 -- setup/common.sh@33 -- # echo 0 00:04:25.167 13:15:22 -- setup/common.sh@33 -- # return 0 00:04:25.167 13:15:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.167 13:15:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.167 13:15:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.167 13:15:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.167 13:15:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.167 node0=1024 expecting 1024 00:04:25.167 13:15:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.167 13:15:22 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.167 13:15:22 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.167 13:15:22 -- setup/hugepages.sh@202 -- # setup output 00:04:25.167 13:15:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.167 13:15:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.474 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:28.474 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:28.474 0000:00:01.1 (8086 0b00): Already using the vfio-pci 
driver 00:04:28.739 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:28.739 13:15:26 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:28.739 13:15:26 -- setup/hugepages.sh@89 -- # local node 00:04:28.739 13:15:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.739 13:15:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.739 13:15:26 -- setup/hugepages.sh@92 -- # local surp 00:04:28.739 13:15:26 -- setup/hugepages.sh@93 -- # local resv 00:04:28.739 13:15:26 -- setup/hugepages.sh@94 -- # local anon 00:04:28.739 13:15:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.739 13:15:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.739 13:15:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.739 13:15:26 -- setup/common.sh@18 -- # local node= 00:04:28.739 13:15:26 -- setup/common.sh@19 -- # local var val 00:04:28.739 13:15:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.739 13:15:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.739 13:15:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.739 13:15:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.739 13:15:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.739 13:15:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.739 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.739 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103157972 kB' 'MemAvailable: 106889296 kB' 'Buffers: 2704 kB' 'Cached: 16247860 kB' 'SwapCached: 0 kB' 'Active: 13118204 kB' 'Inactive: 3693560 kB' 'Active(anon): 12638404 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564088 kB' 'Mapped: 176076 kB' 'Shmem: 12077204 kB' 'KReclaimable: 598908 kB' 'Slab: 1489028 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890120 kB' 'KernelStack: 27152 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14212112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 
13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
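Note: a few records back the script compared the transparent_hugepage policy string (always [madvise] never) against *\[\n\e\v\e\r\]* before fetching AnonHugePages; since THP is not set to never, the anonymous-hugepage counter is read and comes back 0. A rough sketch of that probe, assuming the usual sysfs path (the function name and the path are assumptions, only the policy string and the resulting anon=0 appear in the trace):

  anon_hugepages_sketch() {
      local thp_mode anon=0
      # Assumed path; the trace only shows the resulting policy string.
      thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
      if [[ $thp_mode != *"[never]"* ]]; then
          # THP not disabled, so the AnonHugePages counter is worth reading.
          anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      fi
      echo "anon_hugepages=${anon:-0}"
  }
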
00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.740 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.740 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.741 13:15:26 -- setup/common.sh@33 -- # echo 0 00:04:28.741 13:15:26 -- setup/common.sh@33 -- # return 0 00:04:28.741 13:15:26 -- 
setup/hugepages.sh@97 -- # anon=0 00:04:28.741 13:15:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.741 13:15:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.741 13:15:26 -- setup/common.sh@18 -- # local node= 00:04:28.741 13:15:26 -- setup/common.sh@19 -- # local var val 00:04:28.741 13:15:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.741 13:15:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.741 13:15:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.741 13:15:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.741 13:15:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.741 13:15:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103158224 kB' 'MemAvailable: 106889548 kB' 'Buffers: 2704 kB' 'Cached: 16247864 kB' 'SwapCached: 0 kB' 'Active: 13118828 kB' 'Inactive: 3693560 kB' 'Active(anon): 12639028 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564720 kB' 'Mapped: 176076 kB' 'Shmem: 12077208 kB' 'KReclaimable: 598908 kB' 'Slab: 1489016 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890108 kB' 'KernelStack: 27136 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14212120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 
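The xtrace above shows setup/common.sh's get_meminfo walking /proc/meminfo key by key (local get/node, mem_f=/proc/meminfo, mapfile, IFS=': ', read -r var val _, then echo/return on the matching key). A minimal sketch of that parser, reconstructed from the traced statements rather than taken from the SPDK source, looks like this:

```bash
#!/usr/bin/env bash
# Minimal sketch of get_meminfo as reconstructed from the setup/common.sh trace
# above (@17-@33). Not the verbatim SPDK helper; the "return 1" fallback is an
# assumption, everything else mirrors the traced statements.
shopt -s extglob                     # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2             # @17/@18: key to look up, optional NUMA node
    local var val _ mem_f mem
    mem_f=/proc/meminfo              # @22: system-wide view by default
    # @23/@24: switch to the per-node view when node<N>/meminfo exists
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"        # @28: slurp the file into an array
    mem=("${mem[@]#Node +([0-9]) }") # @29: strip the "Node <N> " prefix of sysfs lines
    while IFS=': ' read -r var val _; do   # @31: split "Key:   value kB"
        [[ $var == "$get" ]] || continue   # @32: skip every other key
        echo "$val"                        # @33: print the bare number
        return 0
    done < <(printf '%s\n' "${mem[@]}")    # @16: feed the cached lines back in
    return 1                               # assumed: key not present
}

# e.g. anon=$(get_meminfo AnonHugePages); surp=$(get_meminfo HugePages_Surp 0)
```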
00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 
-- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.741 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.741 13:15:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.742 13:15:26 -- setup/common.sh@33 -- # echo 0 00:04:28.742 13:15:26 -- setup/common.sh@33 -- # return 0 00:04:28.742 13:15:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:28.742 13:15:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.742 13:15:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.742 13:15:26 -- setup/common.sh@18 -- # local node= 00:04:28.742 13:15:26 -- setup/common.sh@19 -- # local var val 00:04:28.742 13:15:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.742 13:15:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.742 13:15:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.742 13:15:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.742 13:15:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.742 13:15:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338844 kB' 'MemFree: 103158744 kB' 'MemAvailable: 106890068 kB' 'Buffers: 2704 kB' 'Cached: 16247864 kB' 'SwapCached: 0 kB' 'Active: 13117860 kB' 'Inactive: 3693560 kB' 'Active(anon): 12638060 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564212 kB' 'Mapped: 176000 kB' 'Shmem: 12077208 kB' 'KReclaimable: 598908 kB' 'Slab: 1489036 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890128 kB' 'KernelStack: 27120 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 14212136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.742 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.742 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # 
continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.743 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.743 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.744 13:15:26 -- setup/common.sh@33 -- # echo 0 00:04:28.744 13:15:26 -- setup/common.sh@33 -- # return 0 00:04:28.744 13:15:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:28.744 13:15:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.744 nr_hugepages=1024 00:04:28.744 13:15:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.744 resv_hugepages=0 00:04:28.744 13:15:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.744 surplus_hugepages=0 00:04:28.744 13:15:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.744 anon_hugepages=0 00:04:28.744 13:15:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.744 13:15:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.744 13:15:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.744 13:15:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.744 13:15:26 -- setup/common.sh@18 -- # local node= 00:04:28.744 13:15:26 -- setup/common.sh@19 -- # local var val 00:04:28.744 13:15:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.744 13:15:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.744 13:15:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.744 13:15:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.744 13:15:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.744 13:15:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103159164 kB' 'MemAvailable: 106890488 kB' 'Buffers: 2704 kB' 'Cached: 16247900 kB' 'SwapCached: 0 kB' 'Active: 13117544 kB' 'Inactive: 3693560 kB' 'Active(anon): 12637744 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563868 kB' 'Mapped: 176000 kB' 'Shmem: 12077244 kB' 'KReclaimable: 598908 kB' 'Slab: 1489036 kB' 'SReclaimable: 598908 kB' 'SUnreclaim: 890128 kB' 'KernelStack: 27120 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509448 kB' 'Committed_AS: 14212152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 'VmallocChunk: 0 kB' 'Percpu: 161280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4705652 kB' 'DirectMap2M: 29577216 kB' 'DirectMap1G: 101711872 kB' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
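Just before this scan, hugepages.sh echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserted (( 1024 == nr_hugepages + surp + resv )) and re-read HugePages_Total. A hedged, self-contained rendering of that consistency check is sketched below; meminfo_val is a hypothetical stand-in for get_meminfo.

```bash
#!/usr/bin/env bash
# Hedged sketch of the accounting check traced at setup/hugepages.sh@97-@110:
# the pool the kernel reports must equal the requested page count plus surplus
# and reserved pages. meminfo_val is a hypothetical stand-in for get_meminfo.
meminfo_val() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

verify_hugepage_pool() {
    local nr_hugepages=${1:-1024}              # requested by the test (1024 here)
    local anon surp resv total
    anon=$(meminfo_val AnonHugePages)          # @97  -> anon=0
    surp=$(meminfo_val HugePages_Surp)         # @99  -> surp=0
    resv=$(meminfo_val HugePages_Rsvd)         # @100 -> resv=0
    total=$(meminfo_val HugePages_Total)       # @110 -> 1024
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv" \
         "surplus_hugepages=$surp anon_hugepages=$anon"
    (( total == nr_hugepages + surp + resv ))  # @107/@110: every page accounted for
}

verify_hugepage_pool 1024 || echo 'hugepage accounting mismatch' >&2
```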
00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.744 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.744 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # 
continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.745 13:15:26 -- setup/common.sh@33 -- # echo 1024 00:04:28.745 13:15:26 -- setup/common.sh@33 -- # return 0 00:04:28.745 13:15:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.745 13:15:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.745 13:15:26 -- setup/hugepages.sh@27 -- # local node 00:04:28.745 13:15:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.745 13:15:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.745 13:15:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.745 13:15:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:28.745 13:15:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.745 13:15:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.745 13:15:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.745 13:15:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.745 13:15:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.745 13:15:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.745 13:15:26 -- setup/common.sh@18 -- # local node=0 00:04:28.745 13:15:26 -- setup/common.sh@19 -- # local var val 00:04:28.745 13:15:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.745 13:15:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.745 13:15:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.745 13:15:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.745 13:15:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.745 13:15:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58397048 kB' 'MemUsed: 7261960 kB' 'SwapCached: 0 kB' 'Active: 2821788 kB' 'Inactive: 235936 kB' 'Active(anon): 2582364 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2801708 kB' 'Mapped: 87976 kB' 'AnonPages: 259220 kB' 'Shmem: 2326348 kB' 'KernelStack: 15032 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 273464 kB' 'Slab: 775292 kB' 'SReclaimable: 273464 kB' 'SUnreclaim: 501828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.745 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.745 13:15:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- 
setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 
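The node=0 argument in this pass switches get_meminfo to /sys/devices/system/node/node0/meminfo, and hugepages.sh@27-@33 earlier walked every node<N> directory to build nodes_sys (1024 pages on node0, 0 on node1, no_nodes=2). A sketch of that per-node walk follows; the nr_hugepages sysfs file for the default 2048 kB page size is assumed as the source of the counts.

```bash
#!/usr/bin/env bash
# Sketch of the per-node walk traced at setup/hugepages.sh@27-@33. The counts
# (node0=1024, node1=0 in this run) are assumed to come from the per-node
# nr_hugepages files for the 2048 kB page size; SPDK's exact source may differ.
shopt -s extglob nullglob            # +([0-9]) globbing; tolerate odd layouts

declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # "/sys/.../node0" -> index 0, "/sys/.../node1" -> index 1, ...
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

no_nodes=${#nodes_sys[@]}            # @32: 2 on this box
(( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }

for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]} huge pages"    # trace expects node0=1024
done
```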
00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # continue 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.746 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.746 13:15:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.008 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.008 13:15:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.008 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.008 13:15:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.008 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.008 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # continue 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.009 13:15:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.009 13:15:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.009 13:15:26 -- setup/common.sh@33 -- # echo 0 00:04:29.009 13:15:26 -- setup/common.sh@33 -- # return 0 00:04:29.009 13:15:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.009 13:15:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.009 13:15:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.009 13:15:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.009 13:15:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.009 node0=1024 expecting 1024 00:04:29.009 13:15:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.009 00:04:29.009 real 0m7.580s 00:04:29.009 user 0m3.050s 00:04:29.009 sys 0m4.658s 00:04:29.009 13:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.009 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 ************************************ 00:04:29.009 END TEST no_shrink_alloc 00:04:29.009 ************************************ 00:04:29.009 13:15:26 -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.009 13:15:26 -- 
setup/hugepages.sh@37 -- # local node hp 00:04:29.009 13:15:26 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.009 13:15:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.009 13:15:26 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.009 13:15:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.009 13:15:26 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.009 13:15:26 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.009 13:15:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.009 13:15:26 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.009 13:15:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.009 13:15:26 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.009 13:15:26 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.009 13:15:26 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.009 00:04:29.009 real 0m27.584s 00:04:29.009 user 0m11.039s 00:04:29.009 sys 0m16.989s 00:04:29.009 13:15:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.009 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 ************************************ 00:04:29.009 END TEST hugepages 00:04:29.009 ************************************ 00:04:29.009 13:15:26 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.009 13:15:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.009 13:15:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.009 13:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 ************************************ 00:04:29.009 START TEST driver 00:04:29.009 ************************************ 00:04:29.009 13:15:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.009 * Looking for test storage... 
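The hugepages suite that just wrapped up leans on two small shell patterns visible in the trace: setup/common.sh scans /proc/meminfo line by line with IFS=': ' read -r var val _ until it reaches the requested field (HugePages_Surp reads back 0 here), and clear_hp then zeroes every per-node hugepage pool by writing 0 into the nr_hugepages files under sysfs. A minimal bash sketch of both, assuming a Linux host with the usual /proc and /sys layout (the function names are illustrative, not the SPDK helpers):

  #!/usr/bin/env bash
  # Return the value of a single /proc/meminfo field, e.g. HugePages_Surp.
  get_meminfo_field() {
      local wanted=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$wanted" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  # Zero every per-node hugepage pool (needs root), mirroring clear_hp in the trace.
  clear_hugepages() {
      local hp
      for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
          echo 0 > "$hp"
      done
  }

  get_meminfo_field HugePages_Surp   # prints 0 on this host, matching the "echo 0" above

Scanning the file with read rather than grep is what produces the long run of "[[ field == HugePages_Surp ]] / continue" lines above: every non-matching meminfo field costs one iteration of the loop.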
00:04:29.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.009 13:15:26 -- setup/driver.sh@68 -- # setup reset 00:04:29.009 13:15:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.009 13:15:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.221 13:15:30 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:33.221 13:15:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.221 13:15:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.221 13:15:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.221 ************************************ 00:04:33.221 START TEST guess_driver 00:04:33.221 ************************************ 00:04:33.221 13:15:30 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:33.221 13:15:30 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:33.221 13:15:30 -- setup/driver.sh@47 -- # local fail=0 00:04:33.221 13:15:30 -- setup/driver.sh@49 -- # pick_driver 00:04:33.221 13:15:30 -- setup/driver.sh@36 -- # vfio 00:04:33.221 13:15:30 -- setup/driver.sh@21 -- # local iommu_grups 00:04:33.221 13:15:30 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:33.221 13:15:30 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:33.221 13:15:30 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:33.221 13:15:30 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:33.221 13:15:30 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:33.221 13:15:30 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:33.221 13:15:30 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:33.221 13:15:30 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:33.221 13:15:30 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:33.221 13:15:30 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:33.221 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:33.221 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:33.221 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:33.222 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:33.222 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:33.222 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:33.222 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:33.222 13:15:30 -- setup/driver.sh@30 -- # return 0 00:04:33.222 13:15:30 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:33.222 13:15:30 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:33.222 13:15:30 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:33.222 13:15:30 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:33.222 Looking for driver=vfio-pci 00:04:33.222 13:15:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.222 13:15:30 -- setup/driver.sh@45 -- # setup output config 00:04:33.222 13:15:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.222 13:15:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.432 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.432 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.432 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:37.433 13:15:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:37.433 13:15:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.433 13:15:34 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.433 13:15:34 -- setup/driver.sh@65 -- # setup reset 00:04:37.433 13:15:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.433 13:15:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.725 00:04:42.725 real 0m8.844s 00:04:42.725 user 0m2.932s 00:04:42.725 sys 0m5.152s 00:04:42.725 13:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.725 13:15:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.725 ************************************ 00:04:42.725 END TEST guess_driver 00:04:42.725 ************************************ 00:04:42.725 00:04:42.725 real 0m13.177s 00:04:42.725 user 0m3.929s 00:04:42.725 sys 0m7.509s 00:04:42.725 13:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.725 13:15:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.725 ************************************ 00:04:42.725 END TEST driver 00:04:42.725 ************************************ 00:04:42.725 13:15:39 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:42.725 13:15:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.725 13:15:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.725 13:15:39 -- common/autotest_common.sh@10 -- # set +x 00:04:42.725 ************************************ 00:04:42.725 START TEST devices 00:04:42.725 ************************************ 00:04:42.725 13:15:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:42.725 * Looking for test storage... 
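The guess_driver test that just passed settles on vfio-pci the way the trace shows: it reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, counts the entries under /sys/kernel/iommu_groups (314 on this node), and accepts the driver only if modprobe --show-depends vfio_pci resolves to real .ko modules; otherwise it would report 'No valid driver found', the string the trace compares against. A rough equivalent in plain bash, assuming a Linux host (names are illustrative, not the SPDK code):

  #!/usr/bin/env bash
  shopt -s nullglob   # an empty /sys/kernel/iommu_groups must give a zero-length array

  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

      # vfio-pci is usable when the IOMMU is active (groups exist) or unsafe
      # no-IOMMU mode is enabled, and the module dependency chain resolves.
      if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
          modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'
      fi
  }

  echo "Looking for driver=$(pick_driver)"

After the pick, the suite re-runs setup.sh config and simply confirms, marker by marker, that each touched device ended up on vfio-pci, which is the block of repeated "[[ vfio-pci == vfio-pci ]]" checks in the trace above.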
00:04:42.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.725 13:15:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:42.725 13:15:39 -- setup/devices.sh@192 -- # setup reset 00:04:42.725 13:15:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.725 13:15:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.932 13:15:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:46.932 13:15:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:46.932 13:15:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:46.932 13:15:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:46.932 13:15:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:46.932 13:15:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:46.932 13:15:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:46.932 13:15:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.932 13:15:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:46.932 13:15:43 -- setup/devices.sh@196 -- # blocks=() 00:04:46.932 13:15:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:46.932 13:15:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:46.932 13:15:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:46.932 13:15:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:46.932 13:15:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.932 13:15:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:46.932 13:15:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.932 13:15:43 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:46.932 13:15:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:46.932 13:15:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:46.932 13:15:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:46.932 13:15:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:46.932 No valid GPT data, bailing 00:04:46.932 13:15:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.932 13:15:43 -- scripts/common.sh@393 -- # pt= 00:04:46.933 13:15:43 -- scripts/common.sh@394 -- # return 1 00:04:46.933 13:15:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:46.933 13:15:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:46.933 13:15:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:46.933 13:15:43 -- setup/common.sh@80 -- # echo 1920383410176 00:04:46.933 13:15:43 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:46.933 13:15:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.933 13:15:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:46.933 13:15:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:46.933 13:15:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:46.933 13:15:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:46.933 13:15:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.933 13:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.933 13:15:43 -- common/autotest_common.sh@10 -- # set +x 00:04:46.933 ************************************ 00:04:46.933 START TEST nvme_mount 00:04:46.933 ************************************ 00:04:46.933 13:15:43 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:46.933 13:15:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:46.933 13:15:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:46.933 13:15:43 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.933 13:15:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.933 13:15:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:46.933 13:15:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.933 13:15:43 -- setup/common.sh@40 -- # local part_no=1 00:04:46.933 13:15:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:46.933 13:15:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.933 13:15:43 -- setup/common.sh@44 -- # parts=() 00:04:46.933 13:15:43 -- setup/common.sh@44 -- # local parts 00:04:46.933 13:15:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.933 13:15:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.933 13:15:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.933 13:15:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.933 13:15:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.933 13:15:43 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:46.933 13:15:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.933 13:15:43 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:47.505 Creating new GPT entries in memory. 00:04:47.505 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.505 other utilities. 00:04:47.505 13:15:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.505 13:15:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.505 13:15:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.505 13:15:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.505 13:15:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:48.523 Creating new GPT entries in memory. 00:04:48.523 The operation has completed successfully. 
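By this point the nvme_mount test has destroyed the old GPT on nvme0n1 and created a single 1 GiB partition (sectors 2048 through 2099199, i.e. 2097152 x 512-byte sectors), with the flock'ed sgdisk call and scripts/sync_dev_uevents.sh making sure udev has produced /dev/nvme0n1p1 before the filesystem step; the next lines format and mount it. Reproduced outside the harness, the sequence might look like the sketch below (device name, sector numbers, and mkfs flags are taken from the trace; the mount point and the udevadm settle stand-in are illustrative):

  #!/usr/bin/env bash
  set -euo pipefail
  disk=/dev/nvme0n1                         # the test disk in this run

  sgdisk "$disk" --zap-all                  # wipe existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199       # partition 1: 1 GiB at 512 B/sector

  # The harness serializes this with flock + sync_dev_uevents.sh; waiting for
  # udev to settle is a generic way to ensure ${disk}p1 exists before mkfs.
  udevadm settle
  [[ -b ${disk}p1 ]]

  mkfs.ext4 -qF "${disk}p1"                 # same flags the trace shows
  mkdir -p /mnt/nvme_mount                  # illustrative mount point
  mount "${disk}p1" /mnt/nvme_mount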
00:04:48.523 13:15:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:48.523 13:15:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.523 13:15:45 -- setup/common.sh@62 -- # wait 728735 00:04:48.523 13:15:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.523 13:15:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:48.523 13:15:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.523 13:15:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:48.523 13:15:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:48.523 13:15:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.523 13:15:45 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.523 13:15:45 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:48.523 13:15:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:48.523 13:15:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.523 13:15:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.523 13:15:45 -- setup/devices.sh@53 -- # local found=0 00:04:48.523 13:15:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.523 13:15:45 -- setup/devices.sh@56 -- # : 00:04:48.523 13:15:45 -- setup/devices.sh@59 -- # local pci status 00:04:48.523 13:15:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.523 13:15:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:48.523 13:15:45 -- setup/devices.sh@47 -- # setup output config 00:04:48.523 13:15:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.523 13:15:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:51.837 13:15:48 -- setup/devices.sh@63 -- # found=1 00:04:51.837 13:15:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 
13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.837 13:15:49 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:51.837 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.098 13:15:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.098 13:15:49 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.098 13:15:49 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.098 13:15:49 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.098 13:15:49 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.098 13:15:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.098 13:15:49 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.098 13:15:49 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.098 13:15:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.098 13:15:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.098 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.098 13:15:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.098 13:15:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.359 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.359 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.359 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.359 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.359 13:15:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:52.359 13:15:49 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:52.359 13:15:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.359 13:15:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:52.359 13:15:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:52.359 13:15:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.360 13:15:49 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.360 13:15:49 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:52.360 13:15:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:52.360 13:15:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.360 13:15:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.360 13:15:49 -- setup/devices.sh@53 -- # local found=0 00:04:52.360 13:15:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.360 13:15:49 -- setup/devices.sh@56 -- # : 00:04:52.360 13:15:49 -- setup/devices.sh@59 -- # local pci status 00:04:52.360 13:15:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.360 13:15:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:52.360 13:15:49 -- setup/devices.sh@47 -- # setup output config 00:04:52.360 13:15:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.360 13:15:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:55.664 13:15:52 -- setup/devices.sh@63 -- # found=1 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.664 13:15:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.664 13:15:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.937 13:15:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.937 13:15:53 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.937 13:15:53 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.937 13:15:53 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.937 13:15:53 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.937 13:15:53 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.937 13:15:53 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:55.937 13:15:53 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:55.937 13:15:53 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:55.937 13:15:53 -- setup/devices.sh@50 -- # local mount_point= 00:04:55.937 13:15:53 -- setup/devices.sh@51 -- # local test_file= 00:04:55.937 13:15:53 -- setup/devices.sh@53 -- # local found=0 00:04:55.937 13:15:53 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.937 13:15:53 -- setup/devices.sh@59 -- # local pci status 00:04:55.937 13:15:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.937 13:15:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:55.937 13:15:53 -- setup/devices.sh@47 -- # setup output config 00:04:55.937 13:15:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.937 13:15:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.250 13:15:56 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:59.250 13:15:56 -- setup/devices.sh@63 -- # found=1 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.250 13:15:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.250 13:15:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.823 13:15:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.823 13:15:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.823 13:15:57 -- setup/devices.sh@68 -- # return 0 00:04:59.823 13:15:57 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:59.823 13:15:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.823 13:15:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:59.823 13:15:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.823 13:15:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.823 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.823 00:04:59.823 real 0m13.392s 00:04:59.823 user 0m4.154s 00:04:59.823 sys 0m7.110s 00:04:59.823 13:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.823 13:15:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.823 ************************************ 00:04:59.823 END TEST nvme_mount 00:04:59.823 ************************************ 00:04:59.823 13:15:57 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:59.823 13:15:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.823 13:15:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.823 13:15:57 -- common/autotest_common.sh@10 -- # set +x 00:04:59.823 ************************************ 00:04:59.823 START TEST dm_mount 00:04:59.823 ************************************ 00:04:59.823 13:15:57 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:59.823 13:15:57 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:59.823 13:15:57 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:59.823 13:15:57 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:59.823 13:15:57 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:59.823 13:15:57 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.823 13:15:57 -- setup/common.sh@40 -- # local part_no=2 00:04:59.823 13:15:57 -- setup/common.sh@41 -- # local size=1073741824 00:04:59.823 13:15:57 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.823 13:15:57 -- setup/common.sh@44 -- # parts=() 00:04:59.823 13:15:57 -- setup/common.sh@44 -- # local parts 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.823 13:15:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.823 13:15:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.823 13:15:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.823 13:15:57 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:59.823 13:15:57 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.823 13:15:57 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:00.767 Creating new GPT entries in memory. 00:05:00.767 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.767 other utilities. 00:05:00.767 13:15:58 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.767 13:15:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.767 13:15:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.767 13:15:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.767 13:15:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:01.711 Creating new GPT entries in memory. 00:05:01.711 The operation has completed successfully. 
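The dm_mount test now repeats the zap-and-partition dance, but with part_no=2: the trace has just created the first 1 GiB partition, and the lines that follow add a second one at sectors 2099200-4196351 and stitch both under a device-mapper node named nvme_dm_test before formatting it. The log only shows the dmsetup create call, not the table it is fed, so the linear concatenation below is an assumption for illustration (device names and sector ranges are from the trace; the mount point is not):

  #!/usr/bin/env bash
  set -euo pipefail
  disk=/dev/nvme0n1
  name=nvme_dm_test                          # dm device name used by the test

  sgdisk "$disk" --zap-all
  sgdisk "$disk" --new=1:2048:2099199        # p1: first 1 GiB
  sgdisk "$disk" --new=2:2099200:4196351     # p2: next 1 GiB, as in the following lines
  udevadm settle

  # Assumed layout: map p1 and p2 back-to-back as one linear dm device.
  s1=$(blockdev --getsz "${disk}p1")         # sizes in 512 B sectors
  s2=$(blockdev --getsz "${disk}p2")
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
      "$s1" "${disk}p1" "$s1" "$s2" "${disk}p2" | dmsetup create "$name"

  mkfs.ext4 -qF "/dev/mapper/$name"
  mkdir -p /mnt/dm_mount                     # illustrative mount point
  mount "/dev/mapper/$name" /mnt/dm_mount

dmsetup create reads the mapping table from stdin, which is why the test can build the device without writing a temporary table file.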
00:05:01.711 13:15:59 -- setup/common.sh@57 -- # (( part++ )) 00:05:01.711 13:15:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.711 13:15:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.711 13:15:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.711 13:15:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.098 The operation has completed successfully. 00:05:03.098 13:16:00 -- setup/common.sh@57 -- # (( part++ )) 00:05:03.098 13:16:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.098 13:16:00 -- setup/common.sh@62 -- # wait 734018 00:05:03.098 13:16:00 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.098 13:16:00 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.098 13:16:00 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.098 13:16:00 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.098 13:16:00 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.098 13:16:00 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.098 13:16:00 -- setup/devices.sh@161 -- # break 00:05:03.098 13:16:00 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.098 13:16:00 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.098 13:16:00 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:03.098 13:16:00 -- setup/devices.sh@166 -- # dm=dm-0 00:05:03.098 13:16:00 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:03.098 13:16:00 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:03.098 13:16:00 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.098 13:16:00 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:03.098 13:16:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.098 13:16:00 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.098 13:16:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:03.098 13:16:00 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.098 13:16:00 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.098 13:16:00 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:03.098 13:16:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:03.098 13:16:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.098 13:16:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.098 13:16:00 -- setup/devices.sh@53 -- # local found=0 00:05:03.098 13:16:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.098 13:16:00 -- setup/devices.sh@56 -- # : 00:05:03.098 13:16:00 -- 
setup/devices.sh@59 -- # local pci status 00:05:03.098 13:16:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.098 13:16:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:03.098 13:16:00 -- setup/devices.sh@47 -- # setup output config 00:05:03.098 13:16:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.098 13:16:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.403 13:16:03 -- setup/devices.sh@63 -- # found=1 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.403 13:16:03 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:06.403 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.664 13:16:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.664 13:16:03 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.664 13:16:03 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.664 13:16:03 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.664 13:16:03 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.664 13:16:03 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.664 13:16:03 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.664 13:16:03 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:06.664 13:16:03 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.664 13:16:03 -- setup/devices.sh@50 -- # local mount_point= 00:05:06.664 13:16:03 -- setup/devices.sh@51 -- # local test_file= 00:05:06.664 13:16:03 -- setup/devices.sh@53 -- # local found=0 00:05:06.664 13:16:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.664 13:16:03 -- setup/devices.sh@59 -- # local pci status 00:05:06.664 13:16:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.664 13:16:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:06.664 13:16:03 -- setup/devices.sh@47 -- # setup output config 00:05:06.664 13:16:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.664 13:16:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:09.969 13:16:07 -- setup/devices.sh@63 -- # found=1 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.969 13:16:07 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.969 13:16:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.230 13:16:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.230 13:16:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:10.230 13:16:07 -- setup/devices.sh@68 -- # return 0 00:05:10.230 13:16:07 -- setup/devices.sh@187 -- # cleanup_dm 00:05:10.230 13:16:07 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.230 13:16:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:10.230 13:16:07 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:10.230 13:16:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.230 13:16:07 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:10.230 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:10.230 13:16:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:10.230 13:16:07 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:10.230 00:05:10.230 real 0m10.563s 00:05:10.230 user 0m2.835s 00:05:10.230 sys 0m4.807s 00:05:10.230 13:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.230 13:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:10.230 ************************************ 00:05:10.230 END TEST dm_mount 00:05:10.230 ************************************ 00:05:10.495 13:16:07 -- setup/devices.sh@1 -- # cleanup 00:05:10.495 13:16:07 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:10.495 13:16:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.495 13:16:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.495 13:16:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:10.495 13:16:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.495 13:16:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:10.799 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:10.799 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:10.799 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:10.799 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:10.799 13:16:07 -- setup/devices.sh@12 -- # cleanup_dm 00:05:10.799 13:16:07 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.799 13:16:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:10.799 13:16:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.799 13:16:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:10.799 13:16:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.799 13:16:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:10.799 00:05:10.799 real 0m28.468s 00:05:10.799 user 0m8.545s 00:05:10.799 sys 0m14.751s 00:05:10.799 13:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.799 13:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:10.799 ************************************ 00:05:10.799 END TEST devices 00:05:10.799 ************************************ 00:05:10.799 00:05:10.799 real 1m35.462s 00:05:10.799 user 0m32.169s 00:05:10.799 sys 0m54.609s 00:05:10.799 13:16:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.799 13:16:08 -- common/autotest_common.sh@10 -- # set +x 00:05:10.799 ************************************ 00:05:10.799 END TEST setup.sh 00:05:10.799 ************************************ 00:05:10.799 13:16:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:14.105 Hugepages 00:05:14.105 node hugesize free / total 00:05:14.105 node0 1048576kB 0 / 0 00:05:14.105 node0 2048kB 2048 / 2048 00:05:14.105 node1 1048576kB 0 / 0 00:05:14.105 node1 2048kB 0 / 0 00:05:14.105 00:05:14.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.105 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:14.105 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:14.105 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:14.105 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:14.105 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:14.105 13:16:11 -- spdk/autotest.sh@141 -- # uname -s 00:05:14.105 13:16:11 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:14.105 13:16:11 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:14.105 13:16:11 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.411 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:05:17.411 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:17.411 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:17.671 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:17.671 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:17.671 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:19.660 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:19.660 13:16:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:20.601 13:16:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:20.601 13:16:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:20.601 13:16:17 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.601 13:16:17 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:20.601 13:16:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:20.601 13:16:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:20.601 13:16:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.601 13:16:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:20.601 13:16:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:20.601 13:16:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:20.602 13:16:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:20.602 13:16:18 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.909 Waiting for block devices as requested 00:05:23.909 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:24.170 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:24.170 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:24.170 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:24.432 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:24.432 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:24.432 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:24.693 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:24.693 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:24.955 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:24.955 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:24.955 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:24.955 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:25.217 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:25.217 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:25.217 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:25.217 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:25.484 13:16:22 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:25.484 13:16:22 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:25.484 13:16:22 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.484 13:16:22 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:25.484 13:16:22 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:25.485 13:16:22 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:25.485 13:16:22 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:25.485 13:16:22 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:25.485 13:16:22 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:25.485 13:16:22 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:25.485 13:16:22 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:25.485 13:16:22 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:25.485 13:16:22 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:25.485 13:16:22 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:25.485 13:16:22 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:25.485 13:16:22 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:25.485 13:16:22 -- common/autotest_common.sh@1542 -- # continue 00:05:25.485 13:16:22 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:25.485 13:16:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:25.485 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.746 13:16:22 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:25.746 13:16:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.746 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.746 13:16:22 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.049 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:29.049 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:29.050 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:29.311 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:29.311 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:29.573 13:16:26 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:29.573 13:16:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.573 13:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.573 13:16:26 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:29.573 13:16:26 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:29.573 13:16:26 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.573 13:16:26 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:29.573 13:16:26 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:29.573 13:16:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:29.573 13:16:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.573 
13:16:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.573 13:16:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.573 13:16:26 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.573 13:16:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:29.573 13:16:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:29.573 13:16:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:29.573 13:16:26 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:29.573 13:16:27 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:29.573 13:16:27 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:29.573 13:16:27 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:29.573 13:16:27 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:29.573 13:16:27 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:29.573 13:16:27 -- common/autotest_common.sh@1578 -- # return 0 00:05:29.573 13:16:27 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:29.573 13:16:27 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:29.573 13:16:27 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:29.573 13:16:27 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:29.573 13:16:27 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:29.573 13:16:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.573 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.573 13:16:27 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.573 13:16:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.573 13:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.573 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.573 ************************************ 00:05:29.573 START TEST env 00:05:29.573 ************************************ 00:05:29.573 13:16:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.835 * Looking for test storage... 
00:05:29.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:29.835 13:16:27 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.835 13:16:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.835 13:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.835 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.835 ************************************ 00:05:29.835 START TEST env_memory 00:05:29.835 ************************************ 00:05:29.835 13:16:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.835 00:05:29.835 00:05:29.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.835 http://cunit.sourceforge.net/ 00:05:29.835 00:05:29.835 00:05:29.835 Suite: memory 00:05:29.835 Test: alloc and free memory map ...[2024-07-26 13:16:27.183463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.835 passed 00:05:29.835 Test: mem map translation ...[2024-07-26 13:16:27.209189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.835 [2024-07-26 13:16:27.209225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.835 [2024-07-26 13:16:27.209272] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.835 [2024-07-26 13:16:27.209280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.835 passed 00:05:29.835 Test: mem map registration ...[2024-07-26 13:16:27.264605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:29.835 [2024-07-26 13:16:27.264630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:29.835 passed 00:05:30.097 Test: mem map adjacent registrations ...passed 00:05:30.097 00:05:30.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.097 suites 1 1 n/a 0 0 00:05:30.097 tests 4 4 4 0 0 00:05:30.097 asserts 152 152 152 0 n/a 00:05:30.097 00:05:30.097 Elapsed time = 0.196 seconds 00:05:30.097 00:05:30.097 real 0m0.211s 00:05:30.097 user 0m0.198s 00:05:30.097 sys 0m0.012s 00:05:30.097 13:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.097 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:05:30.097 ************************************ 00:05:30.097 END TEST env_memory 00:05:30.097 ************************************ 00:05:30.097 13:16:27 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.097 13:16:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.097 13:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.097 13:16:27 -- common/autotest_common.sh@10 -- # set +x 
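Each suite in this log is wrapped by the same banner-and-timing pattern: a START TEST banner, the test binary's output, a real/user/sys summary, then an END TEST banner (env_memory above, env_vtophys below). A minimal sketch of that pattern is included here for orientation only; it is not the run_test helper from common/autotest_common.sh, and the function name and banner width are illustrative assumptions.

#!/usr/bin/env bash
# Sketch only: wrap a command with START/END banners and a time summary,
# mirroring the layout of the suite output in this log.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                      # prints the real/user/sys lines seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}
# Hypothetical usage:
# run_test_sketch env_memory /path/to/spdk/test/env/memory/memory_ut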
00:05:30.097 ************************************ 00:05:30.097 START TEST env_vtophys 00:05:30.097 ************************************ 00:05:30.097 13:16:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.097 EAL: lib.eal log level changed from notice to debug 00:05:30.097 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.097 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.097 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.097 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.097 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.097 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.097 EAL: Detected lcore 6 as core 6 on socket 0 00:05:30.097 EAL: Detected lcore 7 as core 7 on socket 0 00:05:30.097 EAL: Detected lcore 8 as core 8 on socket 0 00:05:30.097 EAL: Detected lcore 9 as core 9 on socket 0 00:05:30.098 EAL: Detected lcore 10 as core 10 on socket 0 00:05:30.098 EAL: Detected lcore 11 as core 11 on socket 0 00:05:30.098 EAL: Detected lcore 12 as core 12 on socket 0 00:05:30.098 EAL: Detected lcore 13 as core 13 on socket 0 00:05:30.098 EAL: Detected lcore 14 as core 14 on socket 0 00:05:30.098 EAL: Detected lcore 15 as core 15 on socket 0 00:05:30.098 EAL: Detected lcore 16 as core 16 on socket 0 00:05:30.098 EAL: Detected lcore 17 as core 17 on socket 0 00:05:30.098 EAL: Detected lcore 18 as core 18 on socket 0 00:05:30.098 EAL: Detected lcore 19 as core 19 on socket 0 00:05:30.098 EAL: Detected lcore 20 as core 20 on socket 0 00:05:30.098 EAL: Detected lcore 21 as core 21 on socket 0 00:05:30.098 EAL: Detected lcore 22 as core 22 on socket 0 00:05:30.098 EAL: Detected lcore 23 as core 23 on socket 0 00:05:30.098 EAL: Detected lcore 24 as core 24 on socket 0 00:05:30.098 EAL: Detected lcore 25 as core 25 on socket 0 00:05:30.098 EAL: Detected lcore 26 as core 26 on socket 0 00:05:30.098 EAL: Detected lcore 27 as core 27 on socket 0 00:05:30.098 EAL: Detected lcore 28 as core 28 on socket 0 00:05:30.098 EAL: Detected lcore 29 as core 29 on socket 0 00:05:30.098 EAL: Detected lcore 30 as core 30 on socket 0 00:05:30.098 EAL: Detected lcore 31 as core 31 on socket 0 00:05:30.098 EAL: Detected lcore 32 as core 32 on socket 0 00:05:30.098 EAL: Detected lcore 33 as core 33 on socket 0 00:05:30.098 EAL: Detected lcore 34 as core 34 on socket 0 00:05:30.098 EAL: Detected lcore 35 as core 35 on socket 0 00:05:30.098 EAL: Detected lcore 36 as core 0 on socket 1 00:05:30.098 EAL: Detected lcore 37 as core 1 on socket 1 00:05:30.098 EAL: Detected lcore 38 as core 2 on socket 1 00:05:30.098 EAL: Detected lcore 39 as core 3 on socket 1 00:05:30.098 EAL: Detected lcore 40 as core 4 on socket 1 00:05:30.098 EAL: Detected lcore 41 as core 5 on socket 1 00:05:30.098 EAL: Detected lcore 42 as core 6 on socket 1 00:05:30.098 EAL: Detected lcore 43 as core 7 on socket 1 00:05:30.098 EAL: Detected lcore 44 as core 8 on socket 1 00:05:30.098 EAL: Detected lcore 45 as core 9 on socket 1 00:05:30.098 EAL: Detected lcore 46 as core 10 on socket 1 00:05:30.098 EAL: Detected lcore 47 as core 11 on socket 1 00:05:30.098 EAL: Detected lcore 48 as core 12 on socket 1 00:05:30.098 EAL: Detected lcore 49 as core 13 on socket 1 00:05:30.098 EAL: Detected lcore 50 as core 14 on socket 1 00:05:30.098 EAL: Detected lcore 51 as core 15 on socket 1 00:05:30.098 EAL: Detected lcore 52 as core 16 on socket 1 00:05:30.098 EAL: Detected lcore 53 as core 17 on socket 1 00:05:30.098 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:30.098 EAL: Detected lcore 55 as core 19 on socket 1 00:05:30.098 EAL: Detected lcore 56 as core 20 on socket 1 00:05:30.098 EAL: Detected lcore 57 as core 21 on socket 1 00:05:30.098 EAL: Detected lcore 58 as core 22 on socket 1 00:05:30.098 EAL: Detected lcore 59 as core 23 on socket 1 00:05:30.098 EAL: Detected lcore 60 as core 24 on socket 1 00:05:30.098 EAL: Detected lcore 61 as core 25 on socket 1 00:05:30.098 EAL: Detected lcore 62 as core 26 on socket 1 00:05:30.098 EAL: Detected lcore 63 as core 27 on socket 1 00:05:30.098 EAL: Detected lcore 64 as core 28 on socket 1 00:05:30.098 EAL: Detected lcore 65 as core 29 on socket 1 00:05:30.098 EAL: Detected lcore 66 as core 30 on socket 1 00:05:30.098 EAL: Detected lcore 67 as core 31 on socket 1 00:05:30.098 EAL: Detected lcore 68 as core 32 on socket 1 00:05:30.098 EAL: Detected lcore 69 as core 33 on socket 1 00:05:30.098 EAL: Detected lcore 70 as core 34 on socket 1 00:05:30.098 EAL: Detected lcore 71 as core 35 on socket 1 00:05:30.098 EAL: Detected lcore 72 as core 0 on socket 0 00:05:30.098 EAL: Detected lcore 73 as core 1 on socket 0 00:05:30.098 EAL: Detected lcore 74 as core 2 on socket 0 00:05:30.098 EAL: Detected lcore 75 as core 3 on socket 0 00:05:30.098 EAL: Detected lcore 76 as core 4 on socket 0 00:05:30.098 EAL: Detected lcore 77 as core 5 on socket 0 00:05:30.098 EAL: Detected lcore 78 as core 6 on socket 0 00:05:30.098 EAL: Detected lcore 79 as core 7 on socket 0 00:05:30.098 EAL: Detected lcore 80 as core 8 on socket 0 00:05:30.098 EAL: Detected lcore 81 as core 9 on socket 0 00:05:30.098 EAL: Detected lcore 82 as core 10 on socket 0 00:05:30.098 EAL: Detected lcore 83 as core 11 on socket 0 00:05:30.098 EAL: Detected lcore 84 as core 12 on socket 0 00:05:30.098 EAL: Detected lcore 85 as core 13 on socket 0 00:05:30.098 EAL: Detected lcore 86 as core 14 on socket 0 00:05:30.098 EAL: Detected lcore 87 as core 15 on socket 0 00:05:30.098 EAL: Detected lcore 88 as core 16 on socket 0 00:05:30.098 EAL: Detected lcore 89 as core 17 on socket 0 00:05:30.098 EAL: Detected lcore 90 as core 18 on socket 0 00:05:30.098 EAL: Detected lcore 91 as core 19 on socket 0 00:05:30.098 EAL: Detected lcore 92 as core 20 on socket 0 00:05:30.098 EAL: Detected lcore 93 as core 21 on socket 0 00:05:30.098 EAL: Detected lcore 94 as core 22 on socket 0 00:05:30.098 EAL: Detected lcore 95 as core 23 on socket 0 00:05:30.098 EAL: Detected lcore 96 as core 24 on socket 0 00:05:30.098 EAL: Detected lcore 97 as core 25 on socket 0 00:05:30.098 EAL: Detected lcore 98 as core 26 on socket 0 00:05:30.098 EAL: Detected lcore 99 as core 27 on socket 0 00:05:30.098 EAL: Detected lcore 100 as core 28 on socket 0 00:05:30.098 EAL: Detected lcore 101 as core 29 on socket 0 00:05:30.098 EAL: Detected lcore 102 as core 30 on socket 0 00:05:30.098 EAL: Detected lcore 103 as core 31 on socket 0 00:05:30.098 EAL: Detected lcore 104 as core 32 on socket 0 00:05:30.098 EAL: Detected lcore 105 as core 33 on socket 0 00:05:30.098 EAL: Detected lcore 106 as core 34 on socket 0 00:05:30.098 EAL: Detected lcore 107 as core 35 on socket 0 00:05:30.098 EAL: Detected lcore 108 as core 0 on socket 1 00:05:30.098 EAL: Detected lcore 109 as core 1 on socket 1 00:05:30.098 EAL: Detected lcore 110 as core 2 on socket 1 00:05:30.098 EAL: Detected lcore 111 as core 3 on socket 1 00:05:30.098 EAL: Detected lcore 112 as core 4 on socket 1 00:05:30.098 EAL: Detected lcore 113 as core 5 on socket 1 00:05:30.098 EAL: Detected lcore 114 as core 6 on socket 1 00:05:30.098 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:30.098 EAL: Detected lcore 116 as core 8 on socket 1 00:05:30.098 EAL: Detected lcore 117 as core 9 on socket 1 00:05:30.098 EAL: Detected lcore 118 as core 10 on socket 1 00:05:30.098 EAL: Detected lcore 119 as core 11 on socket 1 00:05:30.098 EAL: Detected lcore 120 as core 12 on socket 1 00:05:30.098 EAL: Detected lcore 121 as core 13 on socket 1 00:05:30.098 EAL: Detected lcore 122 as core 14 on socket 1 00:05:30.098 EAL: Detected lcore 123 as core 15 on socket 1 00:05:30.098 EAL: Detected lcore 124 as core 16 on socket 1 00:05:30.098 EAL: Detected lcore 125 as core 17 on socket 1 00:05:30.098 EAL: Detected lcore 126 as core 18 on socket 1 00:05:30.098 EAL: Detected lcore 127 as core 19 on socket 1 00:05:30.098 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:30.098 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:30.098 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:30.098 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:30.098 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:30.098 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:30.098 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:30.098 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:30.098 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:30.098 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:30.098 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:30.098 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:30.098 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:30.098 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:30.098 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:30.098 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:30.098 EAL: Maximum logical cores by configuration: 128 00:05:30.098 EAL: Detected CPU lcores: 128 00:05:30.098 EAL: Detected NUMA nodes: 2 00:05:30.098 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:30.098 EAL: Detected shared linkage of DPDK 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:30.098 EAL: Registered [vdev] bus. 
00:05:30.098 EAL: bus.vdev log level changed from disabled to notice 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:30.098 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:30.098 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:30.098 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:30.098 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.098 EAL: No shared files mode enabled, IPC is disabled 00:05:30.098 EAL: Bus pci wants IOVA as 'DC' 00:05:30.098 EAL: Bus vdev wants IOVA as 'DC' 00:05:30.098 EAL: Buses did not request a specific IOVA mode. 00:05:30.098 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.098 EAL: Selected IOVA mode 'VA' 00:05:30.098 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.098 EAL: Probing VFIO support... 00:05:30.098 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.098 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.098 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.098 EAL: VFIO support initialized 00:05:30.098 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.098 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.098 EAL: Setting up physically contiguous memory... 
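The memseg lists that EAL lays out below are sized against the kernel's per-NUMA-node hugepage pools, which is also where the "node0 2048kB 2048 / 2048" style table printed by setup.sh status earlier in this run comes from. A small standalone sketch that reads the same free/total counters straight from sysfs follows; it only illustrates where those numbers live and is not the setup.sh implementation.

#!/usr/bin/env bash
# Print "node size free / total" for every hugepage pool on every NUMA node.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    [ -d "$hp" ] || continue
    size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
    total=$(cat "$hp/nr_hugepages")
    free=$(cat "$hp/free_hugepages")
    echo "$(basename "$node") $size $free / $total"
  done
done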
00:05:30.098 EAL: Setting maximum number of open files to 524288 00:05:30.098 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.098 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.098 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.098 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.098 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.098 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.098 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.098 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.098 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.099 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.099 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.099 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.099 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:30.099 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.099 EAL: Hugepages will be freed exactly as allocated. 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: TSC frequency is ~2400000 KHz 00:05:30.099 EAL: Main lcore 0 is ready (tid=7f767a361a00;cpuset=[0]) 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 0 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:30.099 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.099 00:05:30.099 00:05:30.099 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.099 http://cunit.sourceforge.net/ 00:05:30.099 00:05:30.099 00:05:30.099 Suite: components_suite 00:05:30.099 Test: vtophys_malloc_test ...passed 00:05:30.099 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.099 EAL: Trying to obtain current memory policy. 
00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.099 EAL: Restoring previous memory policy: 4 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.099 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.099 EAL: request: mp_malloc_sync 00:05:30.099 EAL: No shared files mode enabled, IPC is disabled 00:05:30.099 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.099 EAL: Trying to obtain current memory policy. 00:05:30.099 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.361 EAL: Restoring previous memory policy: 4 00:05:30.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.361 EAL: request: mp_malloc_sync 00:05:30.361 EAL: No shared files mode enabled, IPC is disabled 00:05:30.361 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.361 EAL: request: mp_malloc_sync 00:05:30.361 EAL: No shared files mode enabled, IPC is disabled 00:05:30.361 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.361 EAL: Trying to obtain current memory policy. 
00:05:30.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.361 EAL: Restoring previous memory policy: 4 00:05:30.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.361 EAL: request: mp_malloc_sync 00:05:30.361 EAL: No shared files mode enabled, IPC is disabled 00:05:30.361 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.361 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.361 EAL: request: mp_malloc_sync 00:05:30.361 EAL: No shared files mode enabled, IPC is disabled 00:05:30.361 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.361 EAL: Trying to obtain current memory policy. 00:05:30.361 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.623 EAL: Restoring previous memory policy: 4 00:05:30.623 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.623 EAL: request: mp_malloc_sync 00:05:30.623 EAL: No shared files mode enabled, IPC is disabled 00:05:30.623 EAL: Heap on socket 0 was expanded by 1026MB 00:05:30.623 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.884 EAL: request: mp_malloc_sync 00:05:30.884 EAL: No shared files mode enabled, IPC is disabled 00:05:30.884 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:30.884 passed 00:05:30.884 00:05:30.884 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.884 suites 1 1 n/a 0 0 00:05:30.884 tests 2 2 2 0 0 00:05:30.884 asserts 497 497 497 0 n/a 00:05:30.884 00:05:30.884 Elapsed time = 0.655 seconds 00:05:30.884 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.884 EAL: request: mp_malloc_sync 00:05:30.884 EAL: No shared files mode enabled, IPC is disabled 00:05:30.884 EAL: Heap on socket 0 was shrunk by 2MB 00:05:30.884 EAL: No shared files mode enabled, IPC is disabled 00:05:30.884 EAL: No shared files mode enabled, IPC is disabled 00:05:30.884 EAL: No shared files mode enabled, IPC is disabled 00:05:30.884 00:05:30.884 real 0m0.774s 00:05:30.884 user 0m0.406s 00:05:30.884 sys 0m0.345s 00:05:30.884 13:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.884 13:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.884 ************************************ 00:05:30.884 END TEST env_vtophys 00:05:30.884 ************************************ 00:05:30.884 13:16:28 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.884 13:16:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.884 13:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.884 13:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.884 ************************************ 00:05:30.884 START TEST env_pci 00:05:30.884 ************************************ 00:05:30.884 13:16:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.884 00:05:30.884 00:05:30.884 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.884 http://cunit.sourceforge.net/ 00:05:30.884 00:05:30.884 00:05:30.884 Suite: pci 00:05:30.884 Test: pci_hook ...[2024-07-26 13:16:28.222758] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 745208 has claimed it 00:05:30.884 EAL: Cannot find device (10000:00:01.0) 00:05:30.884 EAL: Failed to attach device on primary process 00:05:30.884 passed 00:05:30.884 00:05:30.884 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.884 suites 1 1 n/a 0 0 00:05:30.884 tests 1 1 1 0 0 
00:05:30.884 asserts 25 25 25 0 n/a 00:05:30.884 00:05:30.884 Elapsed time = 0.032 seconds 00:05:30.884 00:05:30.884 real 0m0.051s 00:05:30.884 user 0m0.012s 00:05:30.884 sys 0m0.039s 00:05:30.884 13:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.884 13:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.885 ************************************ 00:05:30.885 END TEST env_pci 00:05:30.885 ************************************ 00:05:30.885 13:16:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:30.885 13:16:28 -- env/env.sh@15 -- # uname 00:05:30.885 13:16:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:30.885 13:16:28 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:30.885 13:16:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.885 13:16:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:30.885 13:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.885 13:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.885 ************************************ 00:05:30.885 START TEST env_dpdk_post_init 00:05:30.885 ************************************ 00:05:30.885 13:16:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.885 EAL: Detected CPU lcores: 128 00:05:30.885 EAL: Detected NUMA nodes: 2 00:05:30.885 EAL: Detected shared linkage of DPDK 00:05:30.885 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.146 EAL: Selected IOVA mode 'VA' 00:05:31.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.146 EAL: VFIO support initialized 00:05:31.146 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.146 EAL: Using IOMMU type 1 (Type 1) 00:05:31.146 EAL: Ignore mapping IO port bar(1) 00:05:31.447 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:31.447 EAL: Ignore mapping IO port bar(1) 00:05:31.447 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:31.727 EAL: Ignore mapping IO port bar(1) 00:05:31.727 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:31.988 EAL: Ignore mapping IO port bar(1) 00:05:31.988 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:31.988 EAL: Ignore mapping IO port bar(1) 00:05:32.250 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:32.250 EAL: Ignore mapping IO port bar(1) 00:05:32.511 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:32.511 EAL: Ignore mapping IO port bar(1) 00:05:32.511 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:32.772 EAL: Ignore mapping IO port bar(1) 00:05:32.772 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:33.033 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:33.295 EAL: Ignore mapping IO port bar(1) 00:05:33.295 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:33.555 EAL: Ignore mapping IO port bar(1) 00:05:33.555 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:33.555 EAL: Ignore mapping IO port bar(1) 00:05:33.817 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:33.817 EAL: Ignore mapping IO port bar(1) 00:05:34.078 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:34.078 EAL: Ignore mapping IO port bar(1) 00:05:34.078 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:34.339 EAL: Ignore mapping IO port bar(1) 00:05:34.339 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:34.599 EAL: Ignore mapping IO port bar(1) 00:05:34.600 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:34.860 EAL: Ignore mapping IO port bar(1) 00:05:34.860 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:34.860 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:34.860 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:35.121 Starting DPDK initialization... 00:05:35.121 Starting SPDK post initialization... 00:05:35.121 SPDK NVMe probe 00:05:35.121 Attaching to 0000:65:00.0 00:05:35.121 Attached to 0000:65:00.0 00:05:35.121 Cleaning up... 00:05:37.035 00:05:37.035 real 0m5.710s 00:05:37.035 user 0m0.174s 00:05:37.035 sys 0m0.075s 00:05:37.035 13:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.035 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.035 ************************************ 00:05:37.035 END TEST env_dpdk_post_init 00:05:37.035 ************************************ 00:05:37.035 13:16:34 -- env/env.sh@26 -- # uname 00:05:37.035 13:16:34 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.035 13:16:34 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.035 13:16:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.035 13:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.036 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.036 ************************************ 00:05:37.036 START TEST env_mem_callbacks 00:05:37.036 ************************************ 00:05:37.036 13:16:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.036 EAL: Detected CPU lcores: 128 00:05:37.036 EAL: Detected NUMA nodes: 2 00:05:37.036 EAL: Detected shared linkage of DPDK 00:05:37.036 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.036 EAL: Selected IOVA mode 'VA' 00:05:37.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.036 EAL: VFIO support initialized 00:05:37.036 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.036 00:05:37.036 00:05:37.036 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.036 http://cunit.sourceforge.net/ 00:05:37.036 00:05:37.036 00:05:37.036 Suite: memory 00:05:37.036 Test: test ... 
00:05:37.036 register 0x200000200000 2097152 00:05:37.036 malloc 3145728 00:05:37.036 register 0x200000400000 4194304 00:05:37.036 buf 0x200000500000 len 3145728 PASSED 00:05:37.036 malloc 64 00:05:37.036 buf 0x2000004fff40 len 64 PASSED 00:05:37.036 malloc 4194304 00:05:37.036 register 0x200000800000 6291456 00:05:37.036 buf 0x200000a00000 len 4194304 PASSED 00:05:37.036 free 0x200000500000 3145728 00:05:37.036 free 0x2000004fff40 64 00:05:37.036 unregister 0x200000400000 4194304 PASSED 00:05:37.036 free 0x200000a00000 4194304 00:05:37.036 unregister 0x200000800000 6291456 PASSED 00:05:37.036 malloc 8388608 00:05:37.036 register 0x200000400000 10485760 00:05:37.036 buf 0x200000600000 len 8388608 PASSED 00:05:37.036 free 0x200000600000 8388608 00:05:37.036 unregister 0x200000400000 10485760 PASSED 00:05:37.036 passed 00:05:37.036 00:05:37.036 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.036 suites 1 1 n/a 0 0 00:05:37.036 tests 1 1 1 0 0 00:05:37.036 asserts 15 15 15 0 n/a 00:05:37.036 00:05:37.036 Elapsed time = 0.006 seconds 00:05:37.036 00:05:37.036 real 0m0.061s 00:05:37.036 user 0m0.019s 00:05:37.036 sys 0m0.041s 00:05:37.036 13:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.036 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.036 ************************************ 00:05:37.036 END TEST env_mem_callbacks 00:05:37.036 ************************************ 00:05:37.036 00:05:37.036 real 0m7.149s 00:05:37.036 user 0m0.934s 00:05:37.036 sys 0m0.772s 00:05:37.036 13:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.036 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.036 ************************************ 00:05:37.036 END TEST env 00:05:37.036 ************************************ 00:05:37.036 13:16:34 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.036 13:16:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.036 13:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.036 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.036 ************************************ 00:05:37.036 START TEST rpc 00:05:37.036 ************************************ 00:05:37.036 13:16:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.036 * Looking for test storage... 00:05:37.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.036 13:16:34 -- rpc/rpc.sh@65 -- # spdk_pid=746660 00:05:37.036 13:16:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.036 13:16:34 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:37.036 13:16:34 -- rpc/rpc.sh@67 -- # waitforlisten 746660 00:05:37.036 13:16:34 -- common/autotest_common.sh@819 -- # '[' -z 746660 ']' 00:05:37.036 13:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.036 13:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.036 13:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
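The rpc suite that starts above launches spdk_tgt -e bdev and waits for /var/tmp/spdk.sock before the rpc_integrity and rpc_plugins checks below drive it with rpc_cmd. A minimal standalone sketch of that flow using scripts/rpc.py follows; the RPC method names are the ones exercised below, while SPDK_DIR, the polling loop, and the jq check are illustrative assumptions rather than the tree's waitforlisten/rpc_cmd helpers.

#!/usr/bin/env bash
# Sketch only: start spdk_tgt, poll for its RPC socket, issue the same bdev
# RPCs the rpc_integrity test below uses, then tear everything down.
set -e
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumption
sock=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!
trap 'kill "$tgt_pid" 2>/dev/null || true' EXIT   # mirrors the killprocess trap set above

# Poll for the UNIX-domain RPC socket instead of sleeping a fixed time.
for _ in $(seq 1 100); do
  [ -S "$sock" ] && break
  sleep 0.1
done

rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$sock" "$@"; }
rpc bdev_malloc_create 8 512                 # 8 MiB malloc bdev, 512-byte blocks (the log shows it named Malloc0)
rpc bdev_passthru_create -b Malloc0 -p Passthru0
rpc bdev_get_bdevs | jq length               # 2: Malloc0 plus Passthru0
rpc bdev_passthru_delete Passthru0
rpc bdev_malloc_delete Malloc0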
00:05:37.036 13:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.036 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.036 [2024-07-26 13:16:34.380579] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:37.036 [2024-07-26 13:16:34.380658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746660 ] 00:05:37.036 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.036 [2024-07-26 13:16:34.445814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.036 [2024-07-26 13:16:34.482477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.036 [2024-07-26 13:16:34.482617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.036 [2024-07-26 13:16:34.482627] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 746660' to capture a snapshot of events at runtime. 00:05:37.036 [2024-07-26 13:16:34.482636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid746660 for offline analysis/debug. 00:05:37.036 [2024-07-26 13:16:34.482668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.980 13:16:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.980 13:16:35 -- common/autotest_common.sh@852 -- # return 0 00:05:37.980 13:16:35 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.980 13:16:35 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.980 13:16:35 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.980 13:16:35 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.980 13:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.980 13:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 ************************************ 00:05:37.980 START TEST rpc_integrity 00:05:37.980 ************************************ 00:05:37.980 13:16:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:37.980 13:16:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.980 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.980 13:16:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.980 13:16:35 -- rpc/rpc.sh@13 -- # jq length 00:05:37.980 13:16:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.980 13:16:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.980 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 13:16:35 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:05:37.980 13:16:35 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.980 13:16:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.980 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.980 13:16:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.980 { 00:05:37.980 "name": "Malloc0", 00:05:37.980 "aliases": [ 00:05:37.980 "8fc08e4c-643e-4797-b1e8-a731fd0f7a73" 00:05:37.980 ], 00:05:37.980 "product_name": "Malloc disk", 00:05:37.980 "block_size": 512, 00:05:37.980 "num_blocks": 16384, 00:05:37.980 "uuid": "8fc08e4c-643e-4797-b1e8-a731fd0f7a73", 00:05:37.980 "assigned_rate_limits": { 00:05:37.980 "rw_ios_per_sec": 0, 00:05:37.980 "rw_mbytes_per_sec": 0, 00:05:37.980 "r_mbytes_per_sec": 0, 00:05:37.980 "w_mbytes_per_sec": 0 00:05:37.980 }, 00:05:37.980 "claimed": false, 00:05:37.980 "zoned": false, 00:05:37.980 "supported_io_types": { 00:05:37.980 "read": true, 00:05:37.980 "write": true, 00:05:37.980 "unmap": true, 00:05:37.980 "write_zeroes": true, 00:05:37.980 "flush": true, 00:05:37.980 "reset": true, 00:05:37.980 "compare": false, 00:05:37.980 "compare_and_write": false, 00:05:37.980 "abort": true, 00:05:37.980 "nvme_admin": false, 00:05:37.980 "nvme_io": false 00:05:37.980 }, 00:05:37.980 "memory_domains": [ 00:05:37.980 { 00:05:37.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.980 "dma_device_type": 2 00:05:37.980 } 00:05:37.980 ], 00:05:37.980 "driver_specific": {} 00:05:37.980 } 00:05:37.980 ]' 00:05:37.980 13:16:35 -- rpc/rpc.sh@17 -- # jq length 00:05:37.980 13:16:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.980 13:16:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.980 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 [2024-07-26 13:16:35.239021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.980 [2024-07-26 13:16:35.239052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.980 [2024-07-26 13:16:35.239064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b2a90 00:05:37.980 [2024-07-26 13:16:35.239071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.980 [2024-07-26 13:16:35.240478] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.980 [2024-07-26 13:16:35.240498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.980 Passthru0 00:05:37.980 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.980 13:16:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.980 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.980 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.980 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.980 13:16:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.980 { 00:05:37.980 "name": "Malloc0", 00:05:37.980 "aliases": [ 00:05:37.980 "8fc08e4c-643e-4797-b1e8-a731fd0f7a73" 00:05:37.980 ], 00:05:37.981 "product_name": "Malloc disk", 00:05:37.981 "block_size": 512, 00:05:37.981 "num_blocks": 16384, 00:05:37.981 "uuid": "8fc08e4c-643e-4797-b1e8-a731fd0f7a73", 00:05:37.981 "assigned_rate_limits": { 00:05:37.981 "rw_ios_per_sec": 0, 00:05:37.981 "rw_mbytes_per_sec": 0, 00:05:37.981 
"r_mbytes_per_sec": 0, 00:05:37.981 "w_mbytes_per_sec": 0 00:05:37.981 }, 00:05:37.981 "claimed": true, 00:05:37.981 "claim_type": "exclusive_write", 00:05:37.981 "zoned": false, 00:05:37.981 "supported_io_types": { 00:05:37.981 "read": true, 00:05:37.981 "write": true, 00:05:37.981 "unmap": true, 00:05:37.981 "write_zeroes": true, 00:05:37.981 "flush": true, 00:05:37.981 "reset": true, 00:05:37.981 "compare": false, 00:05:37.981 "compare_and_write": false, 00:05:37.981 "abort": true, 00:05:37.981 "nvme_admin": false, 00:05:37.981 "nvme_io": false 00:05:37.981 }, 00:05:37.981 "memory_domains": [ 00:05:37.981 { 00:05:37.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.981 "dma_device_type": 2 00:05:37.981 } 00:05:37.981 ], 00:05:37.981 "driver_specific": {} 00:05:37.981 }, 00:05:37.981 { 00:05:37.981 "name": "Passthru0", 00:05:37.981 "aliases": [ 00:05:37.981 "1871d932-0850-5a77-8af9-e27e68aebdf4" 00:05:37.981 ], 00:05:37.981 "product_name": "passthru", 00:05:37.981 "block_size": 512, 00:05:37.981 "num_blocks": 16384, 00:05:37.981 "uuid": "1871d932-0850-5a77-8af9-e27e68aebdf4", 00:05:37.981 "assigned_rate_limits": { 00:05:37.981 "rw_ios_per_sec": 0, 00:05:37.981 "rw_mbytes_per_sec": 0, 00:05:37.981 "r_mbytes_per_sec": 0, 00:05:37.981 "w_mbytes_per_sec": 0 00:05:37.981 }, 00:05:37.981 "claimed": false, 00:05:37.981 "zoned": false, 00:05:37.981 "supported_io_types": { 00:05:37.981 "read": true, 00:05:37.981 "write": true, 00:05:37.981 "unmap": true, 00:05:37.981 "write_zeroes": true, 00:05:37.981 "flush": true, 00:05:37.981 "reset": true, 00:05:37.981 "compare": false, 00:05:37.981 "compare_and_write": false, 00:05:37.981 "abort": true, 00:05:37.981 "nvme_admin": false, 00:05:37.981 "nvme_io": false 00:05:37.981 }, 00:05:37.981 "memory_domains": [ 00:05:37.981 { 00:05:37.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.981 "dma_device_type": 2 00:05:37.981 } 00:05:37.981 ], 00:05:37.981 "driver_specific": { 00:05:37.981 "passthru": { 00:05:37.981 "name": "Passthru0", 00:05:37.981 "base_bdev_name": "Malloc0" 00:05:37.981 } 00:05:37.981 } 00:05:37.981 } 00:05:37.981 ]' 00:05:37.981 13:16:35 -- rpc/rpc.sh@21 -- # jq length 00:05:37.981 13:16:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.981 13:16:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.981 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.981 13:16:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.981 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.981 13:16:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.981 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.981 13:16:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.981 13:16:35 -- rpc/rpc.sh@26 -- # jq length 00:05:37.981 13:16:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.981 00:05:37.981 real 0m0.224s 00:05:37.981 user 0m0.132s 00:05:37.981 sys 0m0.026s 00:05:37.981 13:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 ************************************ 
00:05:37.981 END TEST rpc_integrity 00:05:37.981 ************************************ 00:05:37.981 13:16:35 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.981 13:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.981 13:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 ************************************ 00:05:37.981 START TEST rpc_plugins 00:05:37.981 ************************************ 00:05:37.981 13:16:35 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:37.981 13:16:35 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.981 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.981 13:16:35 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.981 13:16:35 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.981 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.981 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:37.981 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.981 13:16:35 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.981 { 00:05:37.981 "name": "Malloc1", 00:05:37.981 "aliases": [ 00:05:37.981 "0ba91a4a-6bd9-4191-9026-b31d25876cca" 00:05:37.981 ], 00:05:37.981 "product_name": "Malloc disk", 00:05:37.981 "block_size": 4096, 00:05:37.981 "num_blocks": 256, 00:05:37.981 "uuid": "0ba91a4a-6bd9-4191-9026-b31d25876cca", 00:05:37.981 "assigned_rate_limits": { 00:05:37.981 "rw_ios_per_sec": 0, 00:05:37.981 "rw_mbytes_per_sec": 0, 00:05:37.981 "r_mbytes_per_sec": 0, 00:05:37.981 "w_mbytes_per_sec": 0 00:05:37.981 }, 00:05:37.981 "claimed": false, 00:05:37.981 "zoned": false, 00:05:37.981 "supported_io_types": { 00:05:37.981 "read": true, 00:05:37.981 "write": true, 00:05:37.981 "unmap": true, 00:05:37.981 "write_zeroes": true, 00:05:37.981 "flush": true, 00:05:37.981 "reset": true, 00:05:37.981 "compare": false, 00:05:37.981 "compare_and_write": false, 00:05:37.981 "abort": true, 00:05:37.981 "nvme_admin": false, 00:05:37.981 "nvme_io": false 00:05:37.981 }, 00:05:37.981 "memory_domains": [ 00:05:37.981 { 00:05:37.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.981 "dma_device_type": 2 00:05:37.981 } 00:05:37.981 ], 00:05:37.981 "driver_specific": {} 00:05:37.981 } 00:05:37.981 ]' 00:05:37.981 13:16:35 -- rpc/rpc.sh@32 -- # jq length 00:05:38.242 13:16:35 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.242 13:16:35 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.242 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.243 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.243 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.243 13:16:35 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.243 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.243 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.243 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.243 13:16:35 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.243 13:16:35 -- rpc/rpc.sh@36 -- # jq length 00:05:38.243 13:16:35 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.243 00:05:38.243 real 0m0.144s 00:05:38.243 user 0m0.089s 00:05:38.243 sys 0m0.016s 00:05:38.243 13:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.243 13:16:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.243 ************************************ 00:05:38.243 END TEST rpc_plugins 00:05:38.243 ************************************ 00:05:38.243 13:16:35 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.243 13:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.243 13:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.243 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.243 ************************************ 00:05:38.243 START TEST rpc_trace_cmd_test 00:05:38.243 ************************************ 00:05:38.243 13:16:35 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:38.243 13:16:35 -- rpc/rpc.sh@40 -- # local info 00:05:38.243 13:16:35 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.243 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.243 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.243 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.243 13:16:35 -- rpc/rpc.sh@42 -- # info='{ 00:05:38.243 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid746660", 00:05:38.243 "tpoint_group_mask": "0x8", 00:05:38.243 "iscsi_conn": { 00:05:38.243 "mask": "0x2", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "scsi": { 00:05:38.243 "mask": "0x4", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "bdev": { 00:05:38.243 "mask": "0x8", 00:05:38.243 "tpoint_mask": "0xffffffffffffffff" 00:05:38.243 }, 00:05:38.243 "nvmf_rdma": { 00:05:38.243 "mask": "0x10", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "nvmf_tcp": { 00:05:38.243 "mask": "0x20", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "ftl": { 00:05:38.243 "mask": "0x40", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "blobfs": { 00:05:38.243 "mask": "0x80", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "dsa": { 00:05:38.243 "mask": "0x200", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "thread": { 00:05:38.243 "mask": "0x400", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "nvme_pcie": { 00:05:38.243 "mask": "0x800", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "iaa": { 00:05:38.243 "mask": "0x1000", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "nvme_tcp": { 00:05:38.243 "mask": "0x2000", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 }, 00:05:38.243 "bdev_nvme": { 00:05:38.243 "mask": "0x4000", 00:05:38.243 "tpoint_mask": "0x0" 00:05:38.243 } 00:05:38.243 }' 00:05:38.243 13:16:35 -- rpc/rpc.sh@43 -- # jq length 00:05:38.243 13:16:35 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:38.243 13:16:35 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.243 13:16:35 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.243 13:16:35 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.504 13:16:35 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.504 13:16:35 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.504 13:16:35 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.504 13:16:35 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.504 13:16:35 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.504 00:05:38.504 real 0m0.218s 00:05:38.504 user 0m0.180s 00:05:38.504 sys 0m0.028s 00:05:38.504 13:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.504 ************************************ 
00:05:38.504 END TEST rpc_trace_cmd_test 00:05:38.504 ************************************ 00:05:38.504 13:16:35 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.504 13:16:35 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.504 13:16:35 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.504 13:16:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.504 13:16:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.504 ************************************ 00:05:38.504 START TEST rpc_daemon_integrity 00:05:38.504 ************************************ 00:05:38.504 13:16:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:38.504 13:16:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.504 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.504 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.504 13:16:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.504 13:16:35 -- rpc/rpc.sh@13 -- # jq length 00:05:38.504 13:16:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.504 13:16:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.504 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.504 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.504 13:16:35 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.504 13:16:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.504 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.504 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.504 13:16:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.504 { 00:05:38.504 "name": "Malloc2", 00:05:38.504 "aliases": [ 00:05:38.504 "e4759e1c-010d-4977-b50e-7693c7395b0d" 00:05:38.504 ], 00:05:38.504 "product_name": "Malloc disk", 00:05:38.504 "block_size": 512, 00:05:38.504 "num_blocks": 16384, 00:05:38.504 "uuid": "e4759e1c-010d-4977-b50e-7693c7395b0d", 00:05:38.504 "assigned_rate_limits": { 00:05:38.504 "rw_ios_per_sec": 0, 00:05:38.504 "rw_mbytes_per_sec": 0, 00:05:38.504 "r_mbytes_per_sec": 0, 00:05:38.504 "w_mbytes_per_sec": 0 00:05:38.504 }, 00:05:38.504 "claimed": false, 00:05:38.504 "zoned": false, 00:05:38.504 "supported_io_types": { 00:05:38.504 "read": true, 00:05:38.504 "write": true, 00:05:38.504 "unmap": true, 00:05:38.504 "write_zeroes": true, 00:05:38.504 "flush": true, 00:05:38.504 "reset": true, 00:05:38.504 "compare": false, 00:05:38.504 "compare_and_write": false, 00:05:38.504 "abort": true, 00:05:38.504 "nvme_admin": false, 00:05:38.504 "nvme_io": false 00:05:38.504 }, 00:05:38.504 "memory_domains": [ 00:05:38.504 { 00:05:38.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.504 "dma_device_type": 2 00:05:38.504 } 00:05:38.504 ], 00:05:38.504 "driver_specific": {} 00:05:38.504 } 00:05:38.504 ]' 00:05:38.504 13:16:35 -- rpc/rpc.sh@17 -- # jq length 00:05:38.504 13:16:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.504 13:16:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.504 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.504 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.505 [2024-07-26 13:16:35.977025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.505 [2024-07-26 
13:16:35.977054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.505 [2024-07-26 13:16:35.977069] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17b3f20 00:05:38.505 [2024-07-26 13:16:35.977076] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.766 [2024-07-26 13:16:35.978286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.766 [2024-07-26 13:16:35.978305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.766 Passthru0 00:05:38.766 13:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.766 13:16:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.766 13:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.766 13:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 13:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.766 13:16:36 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.766 { 00:05:38.766 "name": "Malloc2", 00:05:38.766 "aliases": [ 00:05:38.766 "e4759e1c-010d-4977-b50e-7693c7395b0d" 00:05:38.766 ], 00:05:38.766 "product_name": "Malloc disk", 00:05:38.766 "block_size": 512, 00:05:38.766 "num_blocks": 16384, 00:05:38.766 "uuid": "e4759e1c-010d-4977-b50e-7693c7395b0d", 00:05:38.766 "assigned_rate_limits": { 00:05:38.766 "rw_ios_per_sec": 0, 00:05:38.766 "rw_mbytes_per_sec": 0, 00:05:38.766 "r_mbytes_per_sec": 0, 00:05:38.766 "w_mbytes_per_sec": 0 00:05:38.766 }, 00:05:38.766 "claimed": true, 00:05:38.766 "claim_type": "exclusive_write", 00:05:38.766 "zoned": false, 00:05:38.766 "supported_io_types": { 00:05:38.766 "read": true, 00:05:38.766 "write": true, 00:05:38.766 "unmap": true, 00:05:38.766 "write_zeroes": true, 00:05:38.766 "flush": true, 00:05:38.766 "reset": true, 00:05:38.766 "compare": false, 00:05:38.766 "compare_and_write": false, 00:05:38.766 "abort": true, 00:05:38.766 "nvme_admin": false, 00:05:38.766 "nvme_io": false 00:05:38.766 }, 00:05:38.766 "memory_domains": [ 00:05:38.766 { 00:05:38.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.766 "dma_device_type": 2 00:05:38.766 } 00:05:38.766 ], 00:05:38.766 "driver_specific": {} 00:05:38.766 }, 00:05:38.766 { 00:05:38.766 "name": "Passthru0", 00:05:38.766 "aliases": [ 00:05:38.766 "c0b323a1-548e-5b39-b085-211873875923" 00:05:38.766 ], 00:05:38.766 "product_name": "passthru", 00:05:38.766 "block_size": 512, 00:05:38.766 "num_blocks": 16384, 00:05:38.766 "uuid": "c0b323a1-548e-5b39-b085-211873875923", 00:05:38.766 "assigned_rate_limits": { 00:05:38.766 "rw_ios_per_sec": 0, 00:05:38.766 "rw_mbytes_per_sec": 0, 00:05:38.766 "r_mbytes_per_sec": 0, 00:05:38.766 "w_mbytes_per_sec": 0 00:05:38.766 }, 00:05:38.766 "claimed": false, 00:05:38.766 "zoned": false, 00:05:38.766 "supported_io_types": { 00:05:38.766 "read": true, 00:05:38.766 "write": true, 00:05:38.766 "unmap": true, 00:05:38.766 "write_zeroes": true, 00:05:38.766 "flush": true, 00:05:38.766 "reset": true, 00:05:38.766 "compare": false, 00:05:38.766 "compare_and_write": false, 00:05:38.766 "abort": true, 00:05:38.766 "nvme_admin": false, 00:05:38.766 "nvme_io": false 00:05:38.766 }, 00:05:38.766 "memory_domains": [ 00:05:38.766 { 00:05:38.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.766 "dma_device_type": 2 00:05:38.766 } 00:05:38.766 ], 00:05:38.766 "driver_specific": { 00:05:38.766 "passthru": { 00:05:38.766 "name": "Passthru0", 00:05:38.766 "base_bdev_name": "Malloc2" 00:05:38.766 } 00:05:38.766 } 00:05:38.766 } 
00:05:38.766 ]' 00:05:38.766 13:16:36 -- rpc/rpc.sh@21 -- # jq length 00:05:38.766 13:16:36 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.766 13:16:36 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.766 13:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.766 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 13:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.766 13:16:36 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.766 13:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.766 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 13:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.766 13:16:36 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.766 13:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.766 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 13:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.766 13:16:36 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.766 13:16:36 -- rpc/rpc.sh@26 -- # jq length 00:05:38.766 13:16:36 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.766 00:05:38.766 real 0m0.257s 00:05:38.766 user 0m0.160s 00:05:38.766 sys 0m0.037s 00:05:38.766 13:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.766 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:38.766 ************************************ 00:05:38.766 END TEST rpc_daemon_integrity 00:05:38.766 ************************************ 00:05:38.766 13:16:36 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.766 13:16:36 -- rpc/rpc.sh@84 -- # killprocess 746660 00:05:38.766 13:16:36 -- common/autotest_common.sh@926 -- # '[' -z 746660 ']' 00:05:38.766 13:16:36 -- common/autotest_common.sh@930 -- # kill -0 746660 00:05:38.766 13:16:36 -- common/autotest_common.sh@931 -- # uname 00:05:38.766 13:16:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.766 13:16:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 746660 00:05:38.766 13:16:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.766 13:16:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.766 13:16:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 746660' 00:05:38.766 killing process with pid 746660 00:05:38.766 13:16:36 -- common/autotest_common.sh@945 -- # kill 746660 00:05:38.766 13:16:36 -- common/autotest_common.sh@950 -- # wait 746660 00:05:39.026 00:05:39.026 real 0m2.163s 00:05:39.026 user 0m2.762s 00:05:39.026 sys 0m0.607s 00:05:39.026 13:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.026 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.026 ************************************ 00:05:39.026 END TEST rpc 00:05:39.026 ************************************ 00:05:39.026 13:16:36 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:39.026 13:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.026 13:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.026 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.026 ************************************ 00:05:39.026 START TEST rpc_client 00:05:39.026 ************************************ 00:05:39.026 13:16:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:39.288 * 
Looking for test storage... 00:05:39.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:39.288 13:16:36 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:39.288 OK 00:05:39.288 13:16:36 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:39.288 00:05:39.288 real 0m0.118s 00:05:39.288 user 0m0.050s 00:05:39.288 sys 0m0.075s 00:05:39.288 13:16:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.288 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.288 ************************************ 00:05:39.288 END TEST rpc_client 00:05:39.288 ************************************ 00:05:39.288 13:16:36 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:39.288 13:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.288 13:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.288 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.288 ************************************ 00:05:39.288 START TEST json_config 00:05:39.288 ************************************ 00:05:39.288 13:16:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:39.288 13:16:36 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.288 13:16:36 -- nvmf/common.sh@7 -- # uname -s 00:05:39.288 13:16:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.288 13:16:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.288 13:16:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.288 13:16:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.288 13:16:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.288 13:16:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.288 13:16:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.288 13:16:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.288 13:16:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.288 13:16:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.288 13:16:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.288 13:16:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:39.288 13:16:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.288 13:16:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.288 13:16:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.288 13:16:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.288 13:16:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.288 13:16:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.288 13:16:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.288 13:16:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.288 13:16:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.288 13:16:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.288 13:16:36 -- paths/export.sh@5 -- # export PATH 00:05:39.288 13:16:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.288 13:16:36 -- nvmf/common.sh@46 -- # : 0 00:05:39.288 13:16:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:39.289 13:16:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:39.289 13:16:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:39.289 13:16:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.289 13:16:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.289 13:16:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:39.289 13:16:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:39.289 13:16:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:39.289 13:16:36 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:39.289 13:16:36 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:39.289 13:16:36 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:39.289 13:16:36 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:39.289 13:16:36 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:39.289 13:16:36 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:39.289 13:16:36 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:39.289 13:16:36 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:39.289 13:16:36 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:39.289 13:16:36 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:39.289 13:16:36 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.289 13:16:36 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:39.289 INFO: JSON configuration test init 00:05:39.289 13:16:36 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:39.289 13:16:36 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:39.289 13:16:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.289 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.289 13:16:36 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:39.289 13:16:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.289 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.289 13:16:36 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:39.289 13:16:36 -- json_config/json_config.sh@98 -- # local app=target 00:05:39.289 13:16:36 -- json_config/json_config.sh@99 -- # shift 00:05:39.289 13:16:36 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:39.289 13:16:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:39.289 13:16:36 -- json_config/json_config.sh@111 -- # app_pid[$app]=747214 00:05:39.289 13:16:36 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:39.289 Waiting for target to run... 00:05:39.289 13:16:36 -- json_config/json_config.sh@114 -- # waitforlisten 747214 /var/tmp/spdk_tgt.sock 00:05:39.289 13:16:36 -- common/autotest_common.sh@819 -- # '[' -z 747214 ']' 00:05:39.289 13:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.289 13:16:36 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:39.289 13:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.289 13:16:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.289 13:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.289 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:39.289 [2024-07-26 13:16:36.761328] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:39.289 [2024-07-26 13:16:36.761405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747214 ] 00:05:39.550 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.812 [2024-07-26 13:16:37.073321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.812 [2024-07-26 13:16:37.092629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.812 [2024-07-26 13:16:37.092785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.073 13:16:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.073 13:16:37 -- common/autotest_common.sh@852 -- # return 0 00:05:40.073 13:16:37 -- json_config/json_config.sh@115 -- # echo '' 00:05:40.073 00:05:40.073 13:16:37 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:40.073 13:16:37 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:40.073 13:16:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:40.073 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.073 13:16:37 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:40.073 13:16:37 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:40.073 13:16:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.073 13:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:40.334 13:16:37 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:40.334 13:16:37 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:40.334 13:16:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.594 13:16:38 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:40.595 13:16:38 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:40.595 13:16:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:40.595 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.595 13:16:38 -- json_config/json_config.sh@48 -- # local ret=0 00:05:40.595 13:16:38 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:40.595 13:16:38 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:40.595 13:16:38 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:40.595 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:40.595 13:16:38 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:40.856 13:16:38 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:40.856 13:16:38 -- json_config/json_config.sh@51 -- # local get_types 00:05:40.856 13:16:38 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:40.856 13:16:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.856 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.856 13:16:38 -- json_config/json_config.sh@58 -- # return 0 00:05:40.856 13:16:38 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:40.856 13:16:38 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:40.856 13:16:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:40.856 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.856 13:16:38 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:40.856 13:16:38 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:40.856 13:16:38 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.856 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.117 MallocForNvmf0 00:05:41.117 13:16:38 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.117 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.117 MallocForNvmf1 00:05:41.117 13:16:38 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.117 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.378 [2024-07-26 13:16:38.649691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.378 13:16:38 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.378 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.378 13:16:38 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.378 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.640 13:16:38 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.640 13:16:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.640 13:16:39 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.640 13:16:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.901 [2024-07-26 13:16:39.195518] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
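The trace above amounts to the test building its NVMe-oF configuration one RPC at a time: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both bdevs attached as namespaces, and a listener on 127.0.0.1:4420. For reference, a minimal hand-run sketch of the same sequence against the target's RPC socket; every command below also appears in the log, except the closing nvmf_get_subsystems call, which is only an assumed way to inspect the result:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  # two malloc bdevs: 8 MB with 512-byte blocks and 4 MB with 1024-byte blocks
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, subsystem, two namespaces, then a listener on 127.0.0.1:4420
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # assumed follow-up, not part of the test above: dump the subsystems to confirm the result
  $rpc -s $sock nvmf_get_subsystems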
00:05:41.901 13:16:39 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:41.901 13:16:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.901 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.901 13:16:39 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:41.901 13:16:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:41.901 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.901 13:16:39 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:41.901 13:16:39 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.901 13:16:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:42.163 MallocBdevForConfigChangeCheck 00:05:42.163 13:16:39 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:42.163 13:16:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.163 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:05:42.163 13:16:39 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:42.163 13:16:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.424 13:16:39 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:42.424 INFO: shutting down applications... 00:05:42.424 13:16:39 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:42.424 13:16:39 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:42.424 13:16:39 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:42.424 13:16:39 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:42.685 Calling clear_iscsi_subsystem 00:05:42.685 Calling clear_nvmf_subsystem 00:05:42.685 Calling clear_nbd_subsystem 00:05:42.685 Calling clear_ublk_subsystem 00:05:42.685 Calling clear_vhost_blk_subsystem 00:05:42.685 Calling clear_vhost_scsi_subsystem 00:05:42.685 Calling clear_scheduler_subsystem 00:05:42.685 Calling clear_bdev_subsystem 00:05:42.685 Calling clear_accel_subsystem 00:05:42.685 Calling clear_vmd_subsystem 00:05:42.685 Calling clear_sock_subsystem 00:05:42.685 Calling clear_iobuf_subsystem 00:05:42.685 13:16:40 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:42.685 13:16:40 -- json_config/json_config.sh@396 -- # count=100 00:05:42.685 13:16:40 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:42.685 13:16:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.685 13:16:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:42.685 13:16:40 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:43.258 13:16:40 -- json_config/json_config.sh@398 -- # break 00:05:43.258 13:16:40 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:43.258 13:16:40 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:43.258 13:16:40 -- json_config/json_config.sh@120 -- # local app=target 00:05:43.258 13:16:40 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:43.258 13:16:40 -- json_config/json_config.sh@124 -- # [[ -n 747214 ]] 00:05:43.258 13:16:40 -- json_config/json_config.sh@127 -- # kill -SIGINT 747214 00:05:43.258 13:16:40 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:43.258 13:16:40 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.258 13:16:40 -- json_config/json_config.sh@130 -- # kill -0 747214 00:05:43.258 13:16:40 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:43.520 13:16:40 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:43.520 13:16:40 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.520 13:16:40 -- json_config/json_config.sh@130 -- # kill -0 747214 00:05:43.520 13:16:40 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:43.520 13:16:40 -- json_config/json_config.sh@132 -- # break 00:05:43.520 13:16:40 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:43.520 13:16:40 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:43.520 SPDK target shutdown done 00:05:43.520 13:16:40 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:43.520 INFO: relaunching applications... 00:05:43.520 13:16:40 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.520 13:16:40 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.520 13:16:40 -- json_config/json_config.sh@99 -- # shift 00:05:43.520 13:16:40 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.520 13:16:40 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.520 13:16:40 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.520 13:16:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.520 13:16:40 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.520 13:16:40 -- json_config/json_config.sh@111 -- # app_pid[$app]=748348 00:05:43.520 13:16:40 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.520 Waiting for target to run... 00:05:43.520 13:16:40 -- json_config/json_config.sh@114 -- # waitforlisten 748348 /var/tmp/spdk_tgt.sock 00:05:43.520 13:16:40 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.520 13:16:40 -- common/autotest_common.sh@819 -- # '[' -z 748348 ']' 00:05:43.520 13:16:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.520 13:16:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.520 13:16:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.520 13:16:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.520 13:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:43.520 [2024-07-26 13:16:40.983243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:43.520 [2024-07-26 13:16:40.983305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748348 ] 00:05:43.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.043 [2024-07-26 13:16:41.268148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.043 [2024-07-26 13:16:41.287445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.043 [2024-07-26 13:16:41.287606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.305 [2024-07-26 13:16:41.752183] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.566 [2024-07-26 13:16:41.784562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.139 13:16:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.139 13:16:42 -- common/autotest_common.sh@852 -- # return 0 00:05:45.139 13:16:42 -- json_config/json_config.sh@115 -- # echo '' 00:05:45.139 00:05:45.139 13:16:42 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:45.139 13:16:42 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.139 INFO: Checking if target configuration is the same... 00:05:45.139 13:16:42 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.139 13:16:42 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:45.139 13:16:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.139 + '[' 2 -ne 2 ']' 00:05:45.139 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.139 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.139 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.139 +++ basename /dev/fd/62 00:05:45.139 ++ mktemp /tmp/62.XXX 00:05:45.139 + tmp_file_1=/tmp/62.tCM 00:05:45.139 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.139 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.139 + tmp_file_2=/tmp/spdk_tgt_config.json.T59 00:05:45.140 + ret=0 00:05:45.140 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.401 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.401 + diff -u /tmp/62.tCM /tmp/spdk_tgt_config.json.T59 00:05:45.401 + echo 'INFO: JSON config files are the same' 00:05:45.401 INFO: JSON config files are the same 00:05:45.401 + rm /tmp/62.tCM /tmp/spdk_tgt_config.json.T59 00:05:45.401 + exit 0 00:05:45.401 13:16:42 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:45.401 13:16:42 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:45.401 INFO: changing configuration and checking if this can be detected... 
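The "is the configuration the same" check above boils down to normalizing two JSON views of the target and diffing them: the live save_config output on one side, the previously written spdk_tgt_config.json on the other, both passed through config_filter.py -method sort so that key ordering cannot produce a spurious mismatch. A condensed, hand-run equivalent (paths as in the log; the temp file names and the $filter variable are arbitrary choices for this sketch, and the real json_diff.sh pipes through file descriptors rather than temp files):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  filter=$spdk/test/json_config/config_filter.py
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < $spdk/spdk_tgt_config.json > /tmp/saved.json
  # identical configurations: diff exits 0 and the test reports 'JSON config files are the same'
  diff -u /tmp/live.json /tmp/saved.json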
00:05:45.401 13:16:42 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.401 13:16:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.662 13:16:42 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.662 13:16:42 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:45.662 13:16:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.662 + '[' 2 -ne 2 ']' 00:05:45.662 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.662 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.662 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.662 +++ basename /dev/fd/62 00:05:45.662 ++ mktemp /tmp/62.XXX 00:05:45.662 + tmp_file_1=/tmp/62.taG 00:05:45.662 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.662 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.662 + tmp_file_2=/tmp/spdk_tgt_config.json.bFT 00:05:45.662 + ret=0 00:05:45.662 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.924 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:45.924 + diff -u /tmp/62.taG /tmp/spdk_tgt_config.json.bFT 00:05:45.924 + ret=1 00:05:45.924 + echo '=== Start of file: /tmp/62.taG ===' 00:05:45.924 + cat /tmp/62.taG 00:05:45.924 + echo '=== End of file: /tmp/62.taG ===' 00:05:45.924 + echo '' 00:05:45.924 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bFT ===' 00:05:45.924 + cat /tmp/spdk_tgt_config.json.bFT 00:05:45.924 + echo '=== End of file: /tmp/spdk_tgt_config.json.bFT ===' 00:05:45.924 + echo '' 00:05:45.924 + rm /tmp/62.taG /tmp/spdk_tgt_config.json.bFT 00:05:45.924 + exit 1 00:05:45.924 13:16:43 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:45.924 INFO: configuration change detected. 
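Change detection is the same comparison run after the target has been mutated: deleting the marker bdev MallocBdevForConfigChangeCheck makes the live configuration diverge from spdk_tgt_config.json, so the sorted diff now exits non-zero. A short sketch, reusing the $spdk and $filter variables introduced in the sketch above:

  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  # the deleted bdev is still recorded in the saved file, so a non-zero diff is the expected outcome
  diff -u /tmp/live.json /tmp/saved.json || echo 'configuration change detected'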
00:05:45.924 13:16:43 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:45.924 13:16:43 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:45.924 13:16:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.924 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 13:16:43 -- json_config/json_config.sh@360 -- # local ret=0 00:05:45.924 13:16:43 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:45.924 13:16:43 -- json_config/json_config.sh@370 -- # [[ -n 748348 ]] 00:05:45.924 13:16:43 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:45.924 13:16:43 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:45.924 13:16:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.924 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 13:16:43 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:45.924 13:16:43 -- json_config/json_config.sh@246 -- # uname -s 00:05:45.924 13:16:43 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:45.924 13:16:43 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:45.924 13:16:43 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:45.924 13:16:43 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:45.924 13:16:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.924 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 13:16:43 -- json_config/json_config.sh@376 -- # killprocess 748348 00:05:45.924 13:16:43 -- common/autotest_common.sh@926 -- # '[' -z 748348 ']' 00:05:45.924 13:16:43 -- common/autotest_common.sh@930 -- # kill -0 748348 00:05:45.924 13:16:43 -- common/autotest_common.sh@931 -- # uname 00:05:45.924 13:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.924 13:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 748348 00:05:45.924 13:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.924 13:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.924 13:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 748348' 00:05:45.924 killing process with pid 748348 00:05:45.924 13:16:43 -- common/autotest_common.sh@945 -- # kill 748348 00:05:45.924 13:16:43 -- common/autotest_common.sh@950 -- # wait 748348 00:05:46.211 13:16:43 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.211 13:16:43 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:46.211 13:16:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.211 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.211 13:16:43 -- json_config/json_config.sh@381 -- # return 0 00:05:46.211 13:16:43 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:46.211 INFO: Success 00:05:46.211 00:05:46.211 real 0m7.061s 00:05:46.211 user 0m8.367s 00:05:46.211 sys 0m1.672s 00:05:46.211 13:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.211 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.211 ************************************ 00:05:46.211 END TEST json_config 00:05:46.211 ************************************ 00:05:46.475 13:16:43 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.475 13:16:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.475 13:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.475 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.475 ************************************ 00:05:46.475 START TEST json_config_extra_key 00:05:46.475 ************************************ 00:05:46.475 13:16:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.475 13:16:43 -- nvmf/common.sh@7 -- # uname -s 00:05:46.475 13:16:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.475 13:16:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.475 13:16:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.475 13:16:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.475 13:16:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.475 13:16:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.475 13:16:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.475 13:16:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.475 13:16:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.475 13:16:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.475 13:16:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.475 13:16:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.475 13:16:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.475 13:16:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.475 13:16:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.475 13:16:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.475 13:16:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.475 13:16:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.475 13:16:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.475 13:16:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.475 13:16:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.475 13:16:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.475 13:16:43 -- paths/export.sh@5 -- # export PATH 00:05:46.475 13:16:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.475 13:16:43 -- nvmf/common.sh@46 -- # : 0 00:05:46.475 13:16:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.475 13:16:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.475 13:16:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.475 13:16:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.475 13:16:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.475 13:16:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.475 13:16:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.475 13:16:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:46.475 INFO: launching applications... 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=748907 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:46.475 Waiting for target to run... 
00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 748907 /var/tmp/spdk_tgt.sock 00:05:46.475 13:16:43 -- common/autotest_common.sh@819 -- # '[' -z 748907 ']' 00:05:46.475 13:16:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.475 13:16:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.475 13:16:43 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.475 13:16:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.475 13:16:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.475 13:16:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.475 [2024-07-26 13:16:43.850349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:46.475 [2024-07-26 13:16:43.850436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748907 ] 00:05:46.475 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.737 [2024-07-26 13:16:44.123852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.737 [2024-07-26 13:16:44.139798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.737 [2024-07-26 13:16:44.139941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.310 13:16:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.310 13:16:44 -- common/autotest_common.sh@852 -- # return 0 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:47.310 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:47.310 INFO: shutting down applications... 
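The "Waiting for target to run..." phase above is the waitforlisten helper: it polls the freshly launched spdk_tgt until its RPC socket answers. A minimal re-implementation of the idea, not the helper's exact code; the retry count, sleep interval, and rpc.py location are illustrative:

    # Sketch: poll the target's RPC socket until it responds, then give up after N tries.
    sock=/var/tmp/spdk_tgt.sock
    rpc=/path/to/spdk/scripts/rpc.py            # illustrative location
    for i in $(seq 1 100); do
        if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "target is up on $sock"
            break
        fi
        sleep 0.5
    done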
00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 748907 ]] 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 748907 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 748907 00:05:47.310 13:16:44 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 748907 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:47.883 SPDK target shutdown done 00:05:47.883 13:16:45 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:47.883 Success 00:05:47.883 00:05:47.883 real 0m1.426s 00:05:47.883 user 0m1.043s 00:05:47.883 sys 0m0.366s 00:05:47.883 13:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.883 13:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.883 ************************************ 00:05:47.883 END TEST json_config_extra_key 00:05:47.883 ************************************ 00:05:47.883 13:16:45 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.883 13:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.883 13:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.883 13:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.883 ************************************ 00:05:47.883 START TEST alias_rpc 00:05:47.883 ************************************ 00:05:47.883 13:16:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:47.883 * Looking for test storage... 00:05:47.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:47.883 13:16:45 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.883 13:16:45 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=749197 00:05:47.883 13:16:45 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 749197 00:05:47.883 13:16:45 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.883 13:16:45 -- common/autotest_common.sh@819 -- # '[' -z 749197 ']' 00:05:47.883 13:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.883 13:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.883 13:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.883 13:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.883 13:16:45 -- common/autotest_common.sh@10 -- # set +x 00:05:47.883 [2024-07-26 13:16:45.293395] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:47.883 [2024-07-26 13:16:45.293462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749197 ] 00:05:47.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.144 [2024-07-26 13:16:45.356689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.144 [2024-07-26 13:16:45.389662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.144 [2024-07-26 13:16:45.389808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.717 13:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.717 13:16:46 -- common/autotest_common.sh@852 -- # return 0 00:05:48.717 13:16:46 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:48.978 13:16:46 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 749197 00:05:48.978 13:16:46 -- common/autotest_common.sh@926 -- # '[' -z 749197 ']' 00:05:48.978 13:16:46 -- common/autotest_common.sh@930 -- # kill -0 749197 00:05:48.978 13:16:46 -- common/autotest_common.sh@931 -- # uname 00:05:48.978 13:16:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.978 13:16:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 749197 00:05:48.978 13:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.978 13:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.978 13:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 749197' 00:05:48.978 killing process with pid 749197 00:05:48.978 13:16:46 -- common/autotest_common.sh@945 -- # kill 749197 00:05:48.978 13:16:46 -- common/autotest_common.sh@950 -- # wait 749197 00:05:49.240 00:05:49.240 real 0m1.326s 00:05:49.240 user 0m1.455s 00:05:49.240 sys 0m0.367s 00:05:49.240 13:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.240 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.240 ************************************ 00:05:49.240 END TEST alias_rpc 00:05:49.240 ************************************ 00:05:49.240 13:16:46 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:49.240 13:16:46 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.240 13:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.240 13:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.240 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.240 ************************************ 00:05:49.240 START TEST spdkcli_tcp 00:05:49.240 ************************************ 00:05:49.240 13:16:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.240 * Looking for test storage... 
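The alias_rpc run above boils down to starting spdk_tgt and feeding a JSON configuration back to it through "rpc.py load_config -i" on stdin. A hedged sketch of that invocation; the rpc.py location and config path are placeholders, the flags are exactly those in the trace:

    # Sketch: replay a saved JSON configuration into a running target.
    rpc=/path/to/spdk/scripts/rpc.py            # illustrative location
    "$rpc" -s /var/tmp/spdk.sock load_config -i < /path/to/saved_config.json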
00:05:49.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.240 13:16:46 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.240 13:16:46 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.240 13:16:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:49.240 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=749588 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@27 -- # waitforlisten 749588 00:05:49.240 13:16:46 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.240 13:16:46 -- common/autotest_common.sh@819 -- # '[' -z 749588 ']' 00:05:49.240 13:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.240 13:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.240 13:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.240 13:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.240 13:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:49.240 [2024-07-26 13:16:46.691879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
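The spdkcli_tcp test that follows never talks to the UNIX socket directly: it bridges /var/tmp/spdk.sock to TCP port 9998 with socat and then drives rpc.py against 127.0.0.1, as the trace below shows. A condensed sketch of that bridge (rpc.py location illustrative):

    # Sketch: expose the target's UNIX RPC socket over TCP and query it.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    rpc=/path/to/spdk/scripts/rpc.py            # illustrative location
    "$rpc" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"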
00:05:49.240 [2024-07-26 13:16:46.691949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749588 ] 00:05:49.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.502 [2024-07-26 13:16:46.758341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.502 [2024-07-26 13:16:46.794932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.502 [2024-07-26 13:16:46.795242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.502 [2024-07-26 13:16:46.795288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.074 13:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.074 13:16:47 -- common/autotest_common.sh@852 -- # return 0 00:05:50.074 13:16:47 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.074 13:16:47 -- spdkcli/tcp.sh@31 -- # socat_pid=749842 00:05:50.074 13:16:47 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.336 [ 00:05:50.336 "bdev_malloc_delete", 00:05:50.336 "bdev_malloc_create", 00:05:50.336 "bdev_null_resize", 00:05:50.336 "bdev_null_delete", 00:05:50.336 "bdev_null_create", 00:05:50.336 "bdev_nvme_cuse_unregister", 00:05:50.336 "bdev_nvme_cuse_register", 00:05:50.336 "bdev_opal_new_user", 00:05:50.336 "bdev_opal_set_lock_state", 00:05:50.336 "bdev_opal_delete", 00:05:50.336 "bdev_opal_get_info", 00:05:50.336 "bdev_opal_create", 00:05:50.336 "bdev_nvme_opal_revert", 00:05:50.336 "bdev_nvme_opal_init", 00:05:50.336 "bdev_nvme_send_cmd", 00:05:50.336 "bdev_nvme_get_path_iostat", 00:05:50.336 "bdev_nvme_get_mdns_discovery_info", 00:05:50.336 "bdev_nvme_stop_mdns_discovery", 00:05:50.336 "bdev_nvme_start_mdns_discovery", 00:05:50.336 "bdev_nvme_set_multipath_policy", 00:05:50.336 "bdev_nvme_set_preferred_path", 00:05:50.336 "bdev_nvme_get_io_paths", 00:05:50.336 "bdev_nvme_remove_error_injection", 00:05:50.336 "bdev_nvme_add_error_injection", 00:05:50.336 "bdev_nvme_get_discovery_info", 00:05:50.336 "bdev_nvme_stop_discovery", 00:05:50.336 "bdev_nvme_start_discovery", 00:05:50.336 "bdev_nvme_get_controller_health_info", 00:05:50.336 "bdev_nvme_disable_controller", 00:05:50.336 "bdev_nvme_enable_controller", 00:05:50.336 "bdev_nvme_reset_controller", 00:05:50.336 "bdev_nvme_get_transport_statistics", 00:05:50.336 "bdev_nvme_apply_firmware", 00:05:50.336 "bdev_nvme_detach_controller", 00:05:50.336 "bdev_nvme_get_controllers", 00:05:50.336 "bdev_nvme_attach_controller", 00:05:50.336 "bdev_nvme_set_hotplug", 00:05:50.336 "bdev_nvme_set_options", 00:05:50.336 "bdev_passthru_delete", 00:05:50.336 "bdev_passthru_create", 00:05:50.336 "bdev_lvol_grow_lvstore", 00:05:50.336 "bdev_lvol_get_lvols", 00:05:50.336 "bdev_lvol_get_lvstores", 00:05:50.336 "bdev_lvol_delete", 00:05:50.336 "bdev_lvol_set_read_only", 00:05:50.336 "bdev_lvol_resize", 00:05:50.336 "bdev_lvol_decouple_parent", 00:05:50.336 "bdev_lvol_inflate", 00:05:50.336 "bdev_lvol_rename", 00:05:50.336 "bdev_lvol_clone_bdev", 00:05:50.336 "bdev_lvol_clone", 00:05:50.336 "bdev_lvol_snapshot", 00:05:50.336 "bdev_lvol_create", 00:05:50.336 "bdev_lvol_delete_lvstore", 00:05:50.336 "bdev_lvol_rename_lvstore", 00:05:50.336 "bdev_lvol_create_lvstore", 00:05:50.336 "bdev_raid_set_options", 00:05:50.336 
"bdev_raid_remove_base_bdev", 00:05:50.336 "bdev_raid_add_base_bdev", 00:05:50.336 "bdev_raid_delete", 00:05:50.336 "bdev_raid_create", 00:05:50.336 "bdev_raid_get_bdevs", 00:05:50.336 "bdev_error_inject_error", 00:05:50.336 "bdev_error_delete", 00:05:50.336 "bdev_error_create", 00:05:50.336 "bdev_split_delete", 00:05:50.336 "bdev_split_create", 00:05:50.336 "bdev_delay_delete", 00:05:50.336 "bdev_delay_create", 00:05:50.336 "bdev_delay_update_latency", 00:05:50.336 "bdev_zone_block_delete", 00:05:50.336 "bdev_zone_block_create", 00:05:50.336 "blobfs_create", 00:05:50.336 "blobfs_detect", 00:05:50.336 "blobfs_set_cache_size", 00:05:50.336 "bdev_aio_delete", 00:05:50.336 "bdev_aio_rescan", 00:05:50.336 "bdev_aio_create", 00:05:50.336 "bdev_ftl_set_property", 00:05:50.336 "bdev_ftl_get_properties", 00:05:50.336 "bdev_ftl_get_stats", 00:05:50.336 "bdev_ftl_unmap", 00:05:50.336 "bdev_ftl_unload", 00:05:50.336 "bdev_ftl_delete", 00:05:50.336 "bdev_ftl_load", 00:05:50.336 "bdev_ftl_create", 00:05:50.336 "bdev_virtio_attach_controller", 00:05:50.336 "bdev_virtio_scsi_get_devices", 00:05:50.336 "bdev_virtio_detach_controller", 00:05:50.336 "bdev_virtio_blk_set_hotplug", 00:05:50.336 "bdev_iscsi_delete", 00:05:50.336 "bdev_iscsi_create", 00:05:50.336 "bdev_iscsi_set_options", 00:05:50.336 "accel_error_inject_error", 00:05:50.336 "ioat_scan_accel_module", 00:05:50.336 "dsa_scan_accel_module", 00:05:50.336 "iaa_scan_accel_module", 00:05:50.336 "vfu_virtio_create_scsi_endpoint", 00:05:50.336 "vfu_virtio_scsi_remove_target", 00:05:50.336 "vfu_virtio_scsi_add_target", 00:05:50.336 "vfu_virtio_create_blk_endpoint", 00:05:50.336 "vfu_virtio_delete_endpoint", 00:05:50.336 "iscsi_set_options", 00:05:50.336 "iscsi_get_auth_groups", 00:05:50.336 "iscsi_auth_group_remove_secret", 00:05:50.336 "iscsi_auth_group_add_secret", 00:05:50.336 "iscsi_delete_auth_group", 00:05:50.336 "iscsi_create_auth_group", 00:05:50.336 "iscsi_set_discovery_auth", 00:05:50.336 "iscsi_get_options", 00:05:50.336 "iscsi_target_node_request_logout", 00:05:50.336 "iscsi_target_node_set_redirect", 00:05:50.336 "iscsi_target_node_set_auth", 00:05:50.336 "iscsi_target_node_add_lun", 00:05:50.336 "iscsi_get_connections", 00:05:50.336 "iscsi_portal_group_set_auth", 00:05:50.336 "iscsi_start_portal_group", 00:05:50.336 "iscsi_delete_portal_group", 00:05:50.336 "iscsi_create_portal_group", 00:05:50.336 "iscsi_get_portal_groups", 00:05:50.336 "iscsi_delete_target_node", 00:05:50.336 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.336 "iscsi_target_node_add_pg_ig_maps", 00:05:50.336 "iscsi_create_target_node", 00:05:50.336 "iscsi_get_target_nodes", 00:05:50.336 "iscsi_delete_initiator_group", 00:05:50.336 "iscsi_initiator_group_remove_initiators", 00:05:50.336 "iscsi_initiator_group_add_initiators", 00:05:50.336 "iscsi_create_initiator_group", 00:05:50.336 "iscsi_get_initiator_groups", 00:05:50.336 "nvmf_set_crdt", 00:05:50.336 "nvmf_set_config", 00:05:50.336 "nvmf_set_max_subsystems", 00:05:50.336 "nvmf_subsystem_get_listeners", 00:05:50.336 "nvmf_subsystem_get_qpairs", 00:05:50.336 "nvmf_subsystem_get_controllers", 00:05:50.336 "nvmf_get_stats", 00:05:50.336 "nvmf_get_transports", 00:05:50.336 "nvmf_create_transport", 00:05:50.336 "nvmf_get_targets", 00:05:50.336 "nvmf_delete_target", 00:05:50.336 "nvmf_create_target", 00:05:50.336 "nvmf_subsystem_allow_any_host", 00:05:50.336 "nvmf_subsystem_remove_host", 00:05:50.336 "nvmf_subsystem_add_host", 00:05:50.336 "nvmf_subsystem_remove_ns", 00:05:50.336 "nvmf_subsystem_add_ns", 00:05:50.336 
"nvmf_subsystem_listener_set_ana_state", 00:05:50.336 "nvmf_discovery_get_referrals", 00:05:50.336 "nvmf_discovery_remove_referral", 00:05:50.336 "nvmf_discovery_add_referral", 00:05:50.336 "nvmf_subsystem_remove_listener", 00:05:50.336 "nvmf_subsystem_add_listener", 00:05:50.336 "nvmf_delete_subsystem", 00:05:50.336 "nvmf_create_subsystem", 00:05:50.336 "nvmf_get_subsystems", 00:05:50.336 "env_dpdk_get_mem_stats", 00:05:50.336 "nbd_get_disks", 00:05:50.336 "nbd_stop_disk", 00:05:50.336 "nbd_start_disk", 00:05:50.336 "ublk_recover_disk", 00:05:50.336 "ublk_get_disks", 00:05:50.336 "ublk_stop_disk", 00:05:50.336 "ublk_start_disk", 00:05:50.336 "ublk_destroy_target", 00:05:50.336 "ublk_create_target", 00:05:50.336 "virtio_blk_create_transport", 00:05:50.336 "virtio_blk_get_transports", 00:05:50.336 "vhost_controller_set_coalescing", 00:05:50.336 "vhost_get_controllers", 00:05:50.336 "vhost_delete_controller", 00:05:50.336 "vhost_create_blk_controller", 00:05:50.336 "vhost_scsi_controller_remove_target", 00:05:50.336 "vhost_scsi_controller_add_target", 00:05:50.336 "vhost_start_scsi_controller", 00:05:50.336 "vhost_create_scsi_controller", 00:05:50.336 "thread_set_cpumask", 00:05:50.336 "framework_get_scheduler", 00:05:50.336 "framework_set_scheduler", 00:05:50.336 "framework_get_reactors", 00:05:50.337 "thread_get_io_channels", 00:05:50.337 "thread_get_pollers", 00:05:50.337 "thread_get_stats", 00:05:50.337 "framework_monitor_context_switch", 00:05:50.337 "spdk_kill_instance", 00:05:50.337 "log_enable_timestamps", 00:05:50.337 "log_get_flags", 00:05:50.337 "log_clear_flag", 00:05:50.337 "log_set_flag", 00:05:50.337 "log_get_level", 00:05:50.337 "log_set_level", 00:05:50.337 "log_get_print_level", 00:05:50.337 "log_set_print_level", 00:05:50.337 "framework_enable_cpumask_locks", 00:05:50.337 "framework_disable_cpumask_locks", 00:05:50.337 "framework_wait_init", 00:05:50.337 "framework_start_init", 00:05:50.337 "scsi_get_devices", 00:05:50.337 "bdev_get_histogram", 00:05:50.337 "bdev_enable_histogram", 00:05:50.337 "bdev_set_qos_limit", 00:05:50.337 "bdev_set_qd_sampling_period", 00:05:50.337 "bdev_get_bdevs", 00:05:50.337 "bdev_reset_iostat", 00:05:50.337 "bdev_get_iostat", 00:05:50.337 "bdev_examine", 00:05:50.337 "bdev_wait_for_examine", 00:05:50.337 "bdev_set_options", 00:05:50.337 "notify_get_notifications", 00:05:50.337 "notify_get_types", 00:05:50.337 "accel_get_stats", 00:05:50.337 "accel_set_options", 00:05:50.337 "accel_set_driver", 00:05:50.337 "accel_crypto_key_destroy", 00:05:50.337 "accel_crypto_keys_get", 00:05:50.337 "accel_crypto_key_create", 00:05:50.337 "accel_assign_opc", 00:05:50.337 "accel_get_module_info", 00:05:50.337 "accel_get_opc_assignments", 00:05:50.337 "vmd_rescan", 00:05:50.337 "vmd_remove_device", 00:05:50.337 "vmd_enable", 00:05:50.337 "sock_set_default_impl", 00:05:50.337 "sock_impl_set_options", 00:05:50.337 "sock_impl_get_options", 00:05:50.337 "iobuf_get_stats", 00:05:50.337 "iobuf_set_options", 00:05:50.337 "framework_get_pci_devices", 00:05:50.337 "framework_get_config", 00:05:50.337 "framework_get_subsystems", 00:05:50.337 "vfu_tgt_set_base_path", 00:05:50.337 "trace_get_info", 00:05:50.337 "trace_get_tpoint_group_mask", 00:05:50.337 "trace_disable_tpoint_group", 00:05:50.337 "trace_enable_tpoint_group", 00:05:50.337 "trace_clear_tpoint_mask", 00:05:50.337 "trace_set_tpoint_mask", 00:05:50.337 "spdk_get_version", 00:05:50.337 "rpc_get_methods" 00:05:50.337 ] 00:05:50.337 13:16:47 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.337 
13:16:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.337 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.337 13:16:47 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.337 13:16:47 -- spdkcli/tcp.sh@38 -- # killprocess 749588 00:05:50.337 13:16:47 -- common/autotest_common.sh@926 -- # '[' -z 749588 ']' 00:05:50.337 13:16:47 -- common/autotest_common.sh@930 -- # kill -0 749588 00:05:50.337 13:16:47 -- common/autotest_common.sh@931 -- # uname 00:05:50.337 13:16:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:50.337 13:16:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 749588 00:05:50.337 13:16:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:50.337 13:16:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:50.337 13:16:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 749588' 00:05:50.337 killing process with pid 749588 00:05:50.337 13:16:47 -- common/autotest_common.sh@945 -- # kill 749588 00:05:50.337 13:16:47 -- common/autotest_common.sh@950 -- # wait 749588 00:05:50.598 00:05:50.598 real 0m1.364s 00:05:50.598 user 0m2.556s 00:05:50.598 sys 0m0.398s 00:05:50.598 13:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.598 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.599 ************************************ 00:05:50.599 END TEST spdkcli_tcp 00:05:50.599 ************************************ 00:05:50.599 13:16:47 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.599 13:16:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.599 13:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.599 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.599 ************************************ 00:05:50.599 START TEST dpdk_mem_utility 00:05:50.599 ************************************ 00:05:50.599 13:16:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.599 * Looking for test storage... 00:05:50.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:50.599 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.599 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=749989 00:05:50.599 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 749989 00:05:50.599 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.599 13:16:48 -- common/autotest_common.sh@819 -- # '[' -z 749989 ']' 00:05:50.599 13:16:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.599 13:16:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.599 13:16:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
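Each suite above tears its target down through the shared killprocess helper: confirm the pid still belongs to an SPDK reactor, signal it, then reap it. A simplified sketch of that pattern; the pid is taken from the run above and the signal choice is illustrative:

    # Sketch: the killprocess idea -- check, kill, reap.
    pid=749588
    if kill -0 "$pid" 2>/dev/null; then
        ps --no-headers -o comm= "$pid"   # reports reactor_0 for an SPDK app
        kill "$pid"
        wait "$pid" 2>/dev/null           # works here because the target is a child of this shell
    fi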
00:05:50.599 13:16:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.599 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:50.860 [2024-07-26 13:16:48.089884] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:50.860 [2024-07-26 13:16:48.089944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749989 ] 00:05:50.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.860 [2024-07-26 13:16:48.150006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.860 [2024-07-26 13:16:48.181443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.860 [2024-07-26 13:16:48.181585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.433 13:16:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.433 13:16:48 -- common/autotest_common.sh@852 -- # return 0 00:05:51.433 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.433 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.433 13:16:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.433 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.433 { 00:05:51.433 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.433 } 00:05:51.433 13:16:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.433 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.433 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:51.433 1 heaps totaling size 814.000000 MiB 00:05:51.433 size: 814.000000 MiB heap id: 0 00:05:51.433 end heaps---------- 00:05:51.433 8 mempools totaling size 598.116089 MiB 00:05:51.433 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.433 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.433 size: 84.521057 MiB name: bdev_io_749989 00:05:51.433 size: 51.011292 MiB name: evtpool_749989 00:05:51.433 size: 50.003479 MiB name: msgpool_749989 00:05:51.433 size: 21.763794 MiB name: PDU_Pool 00:05:51.433 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.433 size: 0.026123 MiB name: Session_Pool 00:05:51.433 end mempools------- 00:05:51.433 6 memzones totaling size 4.142822 MiB 00:05:51.433 size: 1.000366 MiB name: RG_ring_0_749989 00:05:51.433 size: 1.000366 MiB name: RG_ring_1_749989 00:05:51.433 size: 1.000366 MiB name: RG_ring_4_749989 00:05:51.433 size: 1.000366 MiB name: RG_ring_5_749989 00:05:51.433 size: 0.125366 MiB name: RG_ring_2_749989 00:05:51.433 size: 0.015991 MiB name: RG_ring_3_749989 00:05:51.433 end memzones------- 00:05:51.433 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.696 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:51.696 list of free elements. 
size: 12.519348 MiB 00:05:51.696 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:51.696 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:51.696 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:51.696 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:51.696 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:51.696 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:51.696 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:51.696 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:51.696 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:51.696 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:51.696 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:51.696 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:51.696 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:51.696 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:51.696 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:51.696 list of standard malloc elements. size: 199.218079 MiB 00:05:51.696 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:51.696 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:51.696 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.696 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:51.696 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:51.696 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.696 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:51.696 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.696 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:51.696 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:51.696 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:51.696 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:51.696 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:51.696 list of memzone associated elements. size: 602.262573 MiB 00:05:51.696 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:51.696 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.696 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:51.696 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.696 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:51.696 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_749989_0 00:05:51.696 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:51.696 associated memzone info: size: 48.002930 MiB name: MP_evtpool_749989_0 00:05:51.696 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:51.696 associated memzone info: size: 48.002930 MiB name: MP_msgpool_749989_0 00:05:51.696 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:51.696 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.696 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:51.696 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.696 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:51.696 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_749989 00:05:51.696 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:51.696 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_749989 00:05:51.696 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.696 associated memzone info: size: 1.007996 MiB name: MP_evtpool_749989 00:05:51.696 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:51.696 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.696 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:51.696 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.696 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:51.696 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.696 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:51.696 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.696 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:51.696 associated memzone info: size: 1.000366 MiB name: RG_ring_0_749989 00:05:51.696 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:51.696 associated memzone info: size: 1.000366 MiB name: RG_ring_1_749989 00:05:51.696 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:51.696 associated memzone info: size: 1.000366 MiB name: RG_ring_4_749989 00:05:51.696 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:51.696 associated memzone info: size: 1.000366 MiB name: RG_ring_5_749989 00:05:51.696 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:51.696 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_749989 00:05:51.696 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:51.696 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.696 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:51.696 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.696 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:51.696 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.696 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:51.696 associated memzone info: size: 0.125366 MiB name: RG_ring_2_749989 00:05:51.696 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:51.696 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.696 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:51.696 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.696 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:51.696 associated memzone info: size: 0.015991 MiB name: RG_ring_3_749989 00:05:51.696 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:51.696 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.696 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:51.696 associated memzone info: size: 0.000183 MiB name: MP_msgpool_749989 00:05:51.696 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:51.696 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_749989 00:05:51.696 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:51.696 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.696 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.696 13:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 749989 00:05:51.696 13:16:48 -- common/autotest_common.sh@926 -- # '[' -z 749989 ']' 00:05:51.696 13:16:48 -- common/autotest_common.sh@930 -- # kill -0 749989 00:05:51.696 13:16:48 -- common/autotest_common.sh@931 -- # uname 00:05:51.696 13:16:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.696 13:16:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 749989 00:05:51.696 13:16:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.696 13:16:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.696 13:16:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 749989' 00:05:51.696 killing process with pid 749989 00:05:51.696 13:16:48 -- common/autotest_common.sh@945 -- # kill 749989 00:05:51.696 13:16:48 -- common/autotest_common.sh@950 -- # wait 749989 00:05:51.958 00:05:51.958 real 0m1.227s 00:05:51.958 user 0m1.274s 00:05:51.958 sys 0m0.361s 00:05:51.958 13:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.958 13:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.958 ************************************ 00:05:51.958 END TEST dpdk_mem_utility 00:05:51.958 ************************************ 00:05:51.958 13:16:49 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.958 13:16:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.958 13:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.958 13:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.958 
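The dpdk_mem_utility pass above is essentially two commands: ask the target to dump its DPDK memory statistics, then post-process the dump with scripts/dpdk_mem_info.py (plain summary first, per-heap detail with -m 0). A sketch of that sequence, with script locations shortened:

    # Sketch: dump and inspect the target's DPDK memory layout.
    rpc=/path/to/spdk/scripts/rpc.py                  # illustrative location
    mem_info=/path/to/spdk/scripts/dpdk_mem_info.py   # illustrative location

    "$rpc" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$mem_info"          # heap / mempool / memzone summary
    "$mem_info" -m 0     # detailed element list for heap id 0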
************************************ 00:05:51.958 START TEST event 00:05:51.958 ************************************ 00:05:51.958 13:16:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.958 * Looking for test storage... 00:05:51.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.958 13:16:49 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:51.958 13:16:49 -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.958 13:16:49 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.958 13:16:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:51.958 13:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.958 13:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.958 ************************************ 00:05:51.958 START TEST event_perf 00:05:51.958 ************************************ 00:05:51.958 13:16:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.958 Running I/O for 1 seconds...[2024-07-26 13:16:49.321312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:51.958 [2024-07-26 13:16:49.321419] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750374 ] 00:05:51.958 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.958 [2024-07-26 13:16:49.388836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.958 [2024-07-26 13:16:49.425861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.958 [2024-07-26 13:16:49.425979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.958 [2024-07-26 13:16:49.426117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.958 [2024-07-26 13:16:49.426118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.345 Running I/O for 1 seconds... 00:05:53.345 lcore 0: 172225 00:05:53.345 lcore 1: 172223 00:05:53.345 lcore 2: 172225 00:05:53.346 lcore 3: 172228 00:05:53.346 done. 
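The figures above are per-lcore event counts for a one-second event_perf run on core mask 0xF (four reactors). If the aggregate rate is more useful than the per-core lines, a small post-processing sketch (binary path illustrative, output format as in the trace):

    # Sketch: sum the per-lcore counters printed by event_perf.
    /path/to/spdk/test/event/event_perf/event_perf -m 0xF -t 1 \
        | awk '/^lcore/ {total += $3} END {print "total events/sec:", total}'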
00:05:53.346 00:05:53.346 real 0m1.167s 00:05:53.346 user 0m4.085s 00:05:53.346 sys 0m0.081s 00:05:53.346 13:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.346 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.346 ************************************ 00:05:53.346 END TEST event_perf 00:05:53.346 ************************************ 00:05:53.346 13:16:50 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.346 13:16:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:53.346 13:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.346 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.346 ************************************ 00:05:53.346 START TEST event_reactor 00:05:53.346 ************************************ 00:05:53.346 13:16:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.346 [2024-07-26 13:16:50.528501] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:53.346 [2024-07-26 13:16:50.528594] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750520 ] 00:05:53.346 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.346 [2024-07-26 13:16:50.591972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.346 [2024-07-26 13:16:50.621492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.291 test_start 00:05:54.291 oneshot 00:05:54.291 tick 100 00:05:54.291 tick 100 00:05:54.291 tick 250 00:05:54.291 tick 100 00:05:54.291 tick 100 00:05:54.291 tick 100 00:05:54.291 tick 250 00:05:54.291 tick 500 00:05:54.291 tick 100 00:05:54.291 tick 100 00:05:54.291 tick 250 00:05:54.291 tick 100 00:05:54.291 tick 100 00:05:54.291 test_end 00:05:54.291 00:05:54.291 real 0m1.152s 00:05:54.291 user 0m1.078s 00:05:54.291 sys 0m0.071s 00:05:54.291 13:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.291 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.291 ************************************ 00:05:54.291 END TEST event_reactor 00:05:54.291 ************************************ 00:05:54.291 13:16:51 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.291 13:16:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:54.291 13:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.291 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.291 ************************************ 00:05:54.291 START TEST event_reactor_perf 00:05:54.291 ************************************ 00:05:54.291 13:16:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.291 [2024-07-26 13:16:51.722076] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:54.291 [2024-07-26 13:16:51.722176] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750770 ] 00:05:54.291 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.552 [2024-07-26 13:16:51.784839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.552 [2024-07-26 13:16:51.814802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.497 test_start 00:05:55.497 test_end 00:05:55.497 Performance: 363295 events per second 00:05:55.497 00:05:55.497 real 0m1.153s 00:05:55.497 user 0m1.077s 00:05:55.497 sys 0m0.071s 00:05:55.497 13:16:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.497 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.497 ************************************ 00:05:55.497 END TEST event_reactor_perf 00:05:55.497 ************************************ 00:05:55.497 13:16:52 -- event/event.sh@49 -- # uname -s 00:05:55.497 13:16:52 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.497 13:16:52 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.497 13:16:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.497 13:16:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.497 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.497 ************************************ 00:05:55.497 START TEST event_scheduler 00:05:55.497 ************************************ 00:05:55.497 13:16:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.497 * Looking for test storage... 00:05:55.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:55.758 13:16:52 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.758 13:16:52 -- scheduler/scheduler.sh@35 -- # scheduler_pid=751148 00:05:55.758 13:16:52 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.758 13:16:52 -- scheduler/scheduler.sh@37 -- # waitforlisten 751148 00:05:55.758 13:16:52 -- common/autotest_common.sh@819 -- # '[' -z 751148 ']' 00:05:55.758 13:16:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.758 13:16:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.758 13:16:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.758 13:16:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.758 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:55.758 13:16:52 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.758 [2024-07-26 13:16:53.028555] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:55.758 [2024-07-26 13:16:53.028634] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751148 ] 00:05:55.758 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.758 [2024-07-26 13:16:53.080997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.758 [2024-07-26 13:16:53.112479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.758 [2024-07-26 13:16:53.112640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.758 [2024-07-26 13:16:53.112999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.758 [2024-07-26 13:16:53.112999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.330 13:16:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.330 13:16:53 -- common/autotest_common.sh@852 -- # return 0 00:05:56.330 13:16:53 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.330 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.330 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.330 POWER: Env isn't set yet! 00:05:56.330 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:56.330 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.330 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.330 POWER: Attempting to initialise PSTAT power management... 00:05:56.592 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:56.592 POWER: Initialized successfully for lcore 0 power management 00:05:56.592 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:56.592 POWER: Initialized successfully for lcore 1 power management 00:05:56.592 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:56.592 POWER: Initialized successfully for lcore 2 power management 00:05:56.592 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:56.592 POWER: Initialized successfully for lcore 3 power management 00:05:56.592 [2024-07-26 13:16:53.847602] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.592 [2024-07-26 13:16:53.847610] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.592 [2024-07-26 13:16:53.847614] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 [2024-07-26 13:16:53.897687] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
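The POWER and scheduler_dynamic notices above come from the scheduler test app being launched with --wait-for-rpc and then configured over RPC before its framework initializes. The same two-step setup against a waiting target, sketched with rpc.py (socket and script paths illustrative; the RPC names are those in the trace):

    # Sketch: select the dynamic scheduler while the app waits for RPC, then init.
    rpc=/path/to/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc" -s "$sock" framework_set_scheduler dynamic
    "$rpc" -s "$sock" framework_start_init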
00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.592 13:16:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.592 13:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 ************************************ 00:05:56.592 START TEST scheduler_create_thread 00:05:56.592 ************************************ 00:05:56.592 13:16:53 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 2 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 3 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 4 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 5 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 6 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 7 00:05:56.592 13:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:53 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.592 13:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 8 00:05:56.592 13:16:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:54 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.592 13:16:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.592 9 00:05:56.592 
13:16:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.592 13:16:54 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.592 13:16:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.592 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.980 10 00:05:57.980 13:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.980 13:16:55 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.980 13:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.980 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:59.367 13:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.367 13:16:56 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.367 13:16:56 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.367 13:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.367 13:16:56 -- common/autotest_common.sh@10 -- # set +x 00:05:59.940 13:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.940 13:16:57 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.940 13:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.940 13:16:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.883 13:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:00.883 13:16:58 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:00.883 13:16:58 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:00.883 13:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:00.883 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.456 13:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.456 00:06:01.456 real 0m4.797s 00:06:01.456 user 0m0.024s 00:06:01.456 sys 0m0.006s 00:06:01.456 13:16:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.456 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.456 ************************************ 00:06:01.456 END TEST scheduler_create_thread 00:06:01.456 ************************************ 00:06:01.456 13:16:58 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:01.456 13:16:58 -- scheduler/scheduler.sh@46 -- # killprocess 751148 00:06:01.456 13:16:58 -- common/autotest_common.sh@926 -- # '[' -z 751148 ']' 00:06:01.456 13:16:58 -- common/autotest_common.sh@930 -- # kill -0 751148 00:06:01.456 13:16:58 -- common/autotest_common.sh@931 -- # uname 00:06:01.456 13:16:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.456 13:16:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 751148 00:06:01.456 13:16:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:01.456 13:16:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:01.456 13:16:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 751148' 00:06:01.456 killing process with pid 751148 00:06:01.456 13:16:58 -- common/autotest_common.sh@945 -- # kill 751148 00:06:01.456 13:16:58 -- common/autotest_common.sh@950 -- # wait 751148 00:06:01.717 [2024-07-26 13:16:58.983458] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
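scheduler_create_thread above exercises RPCs provided by the test app's scheduler_plugin rather than the stock rpc.py command set: threads are created with a name, cpumask, and active load, adjusted with scheduler_thread_set_active, and removed with scheduler_thread_delete. Condensed from the calls traced above; this assumes the plugin module is importable by rpc.py, as it is inside the test:

    # Sketch: the plugin-provided thread RPCs driven by the test above.
    rpc=/path/to/spdk/scripts/rpc.py
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id from the run above
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete 12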
00:06:01.717 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:01.717 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:01.717 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:01.717 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:01.717 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:01.717 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:01.717 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:01.717 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:01.717 00:06:01.717 real 0m6.235s 00:06:01.717 user 0m14.137s 00:06:01.717 sys 0m0.317s 00:06:01.717 13:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.717 13:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.717 ************************************ 00:06:01.717 END TEST event_scheduler 00:06:01.717 ************************************ 00:06:01.717 13:16:59 -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.717 13:16:59 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.717 13:16:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.717 13:16:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.717 13:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.717 ************************************ 00:06:01.717 START TEST app_repeat 00:06:01.717 ************************************ 00:06:01.717 13:16:59 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:01.717 13:16:59 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.717 13:16:59 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.717 13:16:59 -- event/event.sh@13 -- # local nbd_list 00:06:01.717 13:16:59 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.717 13:16:59 -- event/event.sh@14 -- # local bdev_list 00:06:01.717 13:16:59 -- event/event.sh@15 -- # local repeat_times=4 00:06:01.717 13:16:59 -- event/event.sh@17 -- # modprobe nbd 00:06:01.982 13:16:59 -- event/event.sh@19 -- # repeat_pid=752498 00:06:01.982 13:16:59 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.982 13:16:59 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.982 13:16:59 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 752498' 00:06:01.982 Process app_repeat pid: 752498 00:06:01.982 13:16:59 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.982 13:16:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.982 spdk_app_start Round 0 00:06:01.982 13:16:59 -- event/event.sh@25 -- # waitforlisten 752498 /var/tmp/spdk-nbd.sock 00:06:01.982 13:16:59 -- common/autotest_common.sh@819 -- # '[' -z 752498 ']' 00:06:01.982 13:16:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.982 13:16:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.982 13:16:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:01.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.982 13:16:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.982 13:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:01.982 [2024-07-26 13:16:59.221290] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:01.982 [2024-07-26 13:16:59.221363] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752498 ] 00:06:01.982 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.982 [2024-07-26 13:16:59.282854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.982 [2024-07-26 13:16:59.316551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.982 [2024-07-26 13:16:59.316554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.614 13:16:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.614 13:16:59 -- common/autotest_common.sh@852 -- # return 0 00:06:02.614 13:16:59 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.874 Malloc0 00:06:02.874 13:17:00 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.874 Malloc1 00:06:02.874 13:17:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.874 13:17:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.135 /dev/nbd0 00:06:03.135 13:17:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.135 13:17:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.135 13:17:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:03.135 13:17:00 -- common/autotest_common.sh@857 -- # local i 00:06:03.135 13:17:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.135 13:17:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.135 13:17:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:03.135 13:17:00 -- 
common/autotest_common.sh@861 -- # break 00:06:03.135 13:17:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.135 13:17:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.135 13:17:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.135 1+0 records in 00:06:03.135 1+0 records out 00:06:03.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208247 s, 19.7 MB/s 00:06:03.136 13:17:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.136 13:17:00 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.136 13:17:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.136 13:17:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.136 13:17:00 -- common/autotest_common.sh@877 -- # return 0 00:06:03.136 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.136 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.136 13:17:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.397 /dev/nbd1 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.397 13:17:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:03.397 13:17:00 -- common/autotest_common.sh@857 -- # local i 00:06:03.397 13:17:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.397 13:17:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.397 13:17:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:03.397 13:17:00 -- common/autotest_common.sh@861 -- # break 00:06:03.397 13:17:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.397 13:17:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.397 13:17:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.397 1+0 records in 00:06:03.397 1+0 records out 00:06:03.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278578 s, 14.7 MB/s 00:06:03.397 13:17:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.397 13:17:00 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.397 13:17:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.397 13:17:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.397 13:17:00 -- common/autotest_common.sh@877 -- # return 0 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.397 { 00:06:03.397 "nbd_device": "/dev/nbd0", 00:06:03.397 "bdev_name": "Malloc0" 00:06:03.397 }, 00:06:03.397 { 00:06:03.397 "nbd_device": "/dev/nbd1", 
00:06:03.397 "bdev_name": "Malloc1" 00:06:03.397 } 00:06:03.397 ]' 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.397 { 00:06:03.397 "nbd_device": "/dev/nbd0", 00:06:03.397 "bdev_name": "Malloc0" 00:06:03.397 }, 00:06:03.397 { 00:06:03.397 "nbd_device": "/dev/nbd1", 00:06:03.397 "bdev_name": "Malloc1" 00:06:03.397 } 00:06:03.397 ]' 00:06:03.397 13:17:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.658 /dev/nbd1' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.658 /dev/nbd1' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.658 256+0 records in 00:06:03.658 256+0 records out 00:06:03.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124326 s, 84.3 MB/s 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.658 256+0 records in 00:06:03.658 256+0 records out 00:06:03.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172151 s, 60.9 MB/s 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.658 256+0 records in 00:06:03.658 256+0 records out 00:06:03.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171984 s, 61.0 MB/s 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.658 13:17:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@41 -- # break 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@41 -- # break 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.919 13:17:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@65 -- # true 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.180 13:17:01 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.180 13:17:01 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.441 13:17:01 -- event/event.sh@35 -- # 
sleep 3 00:06:04.441 [2024-07-26 13:17:01.807640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.441 [2024-07-26 13:17:01.834873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.441 [2024-07-26 13:17:01.834876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.441 [2024-07-26 13:17:01.866325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.441 [2024-07-26 13:17:01.866361] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.748 13:17:04 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.748 13:17:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.748 spdk_app_start Round 1 00:06:07.748 13:17:04 -- event/event.sh@25 -- # waitforlisten 752498 /var/tmp/spdk-nbd.sock 00:06:07.748 13:17:04 -- common/autotest_common.sh@819 -- # '[' -z 752498 ']' 00:06:07.748 13:17:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.748 13:17:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.748 13:17:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.748 13:17:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.748 13:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.748 13:17:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.748 13:17:04 -- common/autotest_common.sh@852 -- # return 0 00:06:07.748 13:17:04 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.748 Malloc0 00:06:07.748 13:17:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.748 Malloc1 00:06:07.748 13:17:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@12 -- # local i 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.748 13:17:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.010 /dev/nbd0 00:06:08.010 13:17:05 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.010 13:17:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.010 13:17:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.010 13:17:05 -- common/autotest_common.sh@857 -- # local i 00:06:08.010 13:17:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.010 13:17:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.010 13:17:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.010 13:17:05 -- common/autotest_common.sh@861 -- # break 00:06:08.010 13:17:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.010 13:17:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.010 13:17:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.010 1+0 records in 00:06:08.010 1+0 records out 00:06:08.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272338 s, 15.0 MB/s 00:06:08.010 13:17:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.010 13:17:05 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.010 13:17:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.010 13:17:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.010 13:17:05 -- common/autotest_common.sh@877 -- # return 0 00:06:08.010 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.010 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.010 13:17:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.272 /dev/nbd1 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.272 13:17:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:08.272 13:17:05 -- common/autotest_common.sh@857 -- # local i 00:06:08.272 13:17:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.272 13:17:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.272 13:17:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:08.272 13:17:05 -- common/autotest_common.sh@861 -- # break 00:06:08.272 13:17:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.272 13:17:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.272 13:17:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.272 1+0 records in 00:06:08.272 1+0 records out 00:06:08.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208208 s, 19.7 MB/s 00:06:08.272 13:17:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.272 13:17:05 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.272 13:17:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.272 13:17:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.272 13:17:05 -- common/autotest_common.sh@877 -- # return 0 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.272 { 00:06:08.272 "nbd_device": "/dev/nbd0", 00:06:08.272 "bdev_name": "Malloc0" 00:06:08.272 }, 00:06:08.272 { 00:06:08.272 "nbd_device": "/dev/nbd1", 00:06:08.272 "bdev_name": "Malloc1" 00:06:08.272 } 00:06:08.272 ]' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.272 { 00:06:08.272 "nbd_device": "/dev/nbd0", 00:06:08.272 "bdev_name": "Malloc0" 00:06:08.272 }, 00:06:08.272 { 00:06:08.272 "nbd_device": "/dev/nbd1", 00:06:08.272 "bdev_name": "Malloc1" 00:06:08.272 } 00:06:08.272 ]' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.272 /dev/nbd1' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.272 /dev/nbd1' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.272 13:17:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.534 256+0 records in 00:06:08.534 256+0 records out 00:06:08.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115223 s, 91.0 MB/s 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.534 256+0 records in 00:06:08.534 256+0 records out 00:06:08.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160969 s, 65.1 MB/s 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.534 256+0 records in 00:06:08.534 256+0 records out 00:06:08.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166974 s, 62.8 MB/s 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@41 -- # break 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.534 13:17:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@41 -- # break 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.796 13:17:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@65 -- # true 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.058 13:17:06 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.058 13:17:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.058 13:17:06 -- event/event.sh@35 -- # sleep 3 00:06:09.320 [2024-07-26 13:17:06.641601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.320 [2024-07-26 13:17:06.668940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.320 [2024-07-26 13:17:06.668943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.320 [2024-07-26 13:17:06.700503] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.320 [2024-07-26 13:17:06.700537] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.624 13:17:09 -- event/event.sh@23 -- # for i in {0..2} 00:06:12.624 13:17:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:12.624 spdk_app_start Round 2 00:06:12.624 13:17:09 -- event/event.sh@25 -- # waitforlisten 752498 /var/tmp/spdk-nbd.sock 00:06:12.624 13:17:09 -- common/autotest_common.sh@819 -- # '[' -z 752498 ']' 00:06:12.624 13:17:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.624 13:17:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.624 13:17:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:12.624 13:17:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.624 13:17:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.624 13:17:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.624 13:17:09 -- common/autotest_common.sh@852 -- # return 0 00:06:12.624 13:17:09 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.624 Malloc0 00:06:12.624 13:17:09 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.624 Malloc1 00:06:12.624 13:17:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@12 -- # local i 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.624 13:17:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.885 /dev/nbd0 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.885 13:17:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.885 13:17:10 -- common/autotest_common.sh@857 -- # local i 00:06:12.885 13:17:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.885 13:17:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.885 13:17:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.885 13:17:10 -- common/autotest_common.sh@861 -- # break 00:06:12.885 13:17:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.885 13:17:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.885 13:17:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.885 1+0 records in 00:06:12.885 1+0 records out 00:06:12.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201054 s, 20.4 MB/s 00:06:12.885 13:17:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.885 13:17:10 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.885 13:17:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.885 13:17:10 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:12.885 13:17:10 -- common/autotest_common.sh@877 -- # return 0 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.885 /dev/nbd1 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.885 13:17:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.885 13:17:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.885 13:17:10 -- common/autotest_common.sh@857 -- # local i 00:06:12.885 13:17:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.885 13:17:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.886 13:17:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:13.159 13:17:10 -- common/autotest_common.sh@861 -- # break 00:06:13.159 13:17:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:13.159 13:17:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:13.159 13:17:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.159 1+0 records in 00:06:13.159 1+0 records out 00:06:13.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283316 s, 14.5 MB/s 00:06:13.159 13:17:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.159 13:17:10 -- common/autotest_common.sh@874 -- # size=4096 00:06:13.159 13:17:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.159 13:17:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:13.159 13:17:10 -- common/autotest_common.sh@877 -- # return 0 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.159 { 00:06:13.159 "nbd_device": "/dev/nbd0", 00:06:13.159 "bdev_name": "Malloc0" 00:06:13.159 }, 00:06:13.159 { 00:06:13.159 "nbd_device": "/dev/nbd1", 00:06:13.159 "bdev_name": "Malloc1" 00:06:13.159 } 00:06:13.159 ]' 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.159 { 00:06:13.159 "nbd_device": "/dev/nbd0", 00:06:13.159 "bdev_name": "Malloc0" 00:06:13.159 }, 00:06:13.159 { 00:06:13.159 "nbd_device": "/dev/nbd1", 00:06:13.159 "bdev_name": "Malloc1" 00:06:13.159 } 00:06:13.159 ]' 00:06:13.159 13:17:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.160 /dev/nbd1' 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.160 /dev/nbd1' 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.160 13:17:10 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.160 256+0 records in 00:06:13.160 256+0 records out 00:06:13.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121726 s, 86.1 MB/s 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.160 256+0 records in 00:06:13.160 256+0 records out 00:06:13.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166452 s, 63.0 MB/s 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.160 13:17:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.422 256+0 records in 00:06:13.422 256+0 records out 00:06:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174365 s, 60.1 MB/s 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@51 -- # local i 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.422 13:17:10 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@41 -- # break 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.422 13:17:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@41 -- # break 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.684 13:17:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@65 -- # true 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.945 13:17:11 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.945 13:17:11 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.945 13:17:11 -- event/event.sh@35 -- # sleep 3 00:06:14.206 [2024-07-26 13:17:11.490414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.206 [2024-07-26 13:17:11.517545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.206 [2024-07-26 13:17:11.517547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.206 [2024-07-26 13:17:11.548791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.206 [2024-07-26 13:17:11.548826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
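Each app_repeat round traced above repeats the same NBD-backed data-integrity pattern: export the two malloc bdevs as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, and compare it back against the source file byte-for-byte. Condensed into a standalone sketch, assuming app_repeat is already listening on /var/tmp/spdk-nbd.sock with Malloc0/Malloc1 created as in the trace (bdev_malloc_create 64 4096), and with mktemp standing in for the harness's fixed nbdrandtest path:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    tmp=$(mktemp)                                     # stand-in for the harness's nbdrandtest file
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the NBD device
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and verify byte-for-byte
    done
    rm -f "$tmp"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

The oflag=direct on the write and the cmp on the raw device are what make this a round trip through the SPDK bdev layer rather than just the page cache.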
00:06:17.511 13:17:14 -- event/event.sh@38 -- # waitforlisten 752498 /var/tmp/spdk-nbd.sock 00:06:17.511 13:17:14 -- common/autotest_common.sh@819 -- # '[' -z 752498 ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.511 13:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.511 13:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.511 13:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.511 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.511 13:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.511 13:17:14 -- common/autotest_common.sh@852 -- # return 0 00:06:17.511 13:17:14 -- event/event.sh@39 -- # killprocess 752498 00:06:17.511 13:17:14 -- common/autotest_common.sh@926 -- # '[' -z 752498 ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@930 -- # kill -0 752498 00:06:17.511 13:17:14 -- common/autotest_common.sh@931 -- # uname 00:06:17.511 13:17:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 752498 00:06:17.511 13:17:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.511 13:17:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 752498' 00:06:17.511 killing process with pid 752498 00:06:17.511 13:17:14 -- common/autotest_common.sh@945 -- # kill 752498 00:06:17.511 13:17:14 -- common/autotest_common.sh@950 -- # wait 752498 00:06:17.511 spdk_app_start is called in Round 0. 00:06:17.511 Shutdown signal received, stop current app iteration 00:06:17.511 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:17.511 spdk_app_start is called in Round 1. 00:06:17.511 Shutdown signal received, stop current app iteration 00:06:17.511 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:17.511 spdk_app_start is called in Round 2. 00:06:17.511 Shutdown signal received, stop current app iteration 00:06:17.511 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:17.511 spdk_app_start is called in Round 3. 
00:06:17.511 Shutdown signal received, stop current app iteration 00:06:17.511 13:17:14 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.511 13:17:14 -- event/event.sh@42 -- # return 0 00:06:17.511 00:06:17.511 real 0m15.498s 00:06:17.511 user 0m33.548s 00:06:17.511 sys 0m2.087s 00:06:17.511 13:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.511 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.511 ************************************ 00:06:17.511 END TEST app_repeat 00:06:17.511 ************************************ 00:06:17.511 13:17:14 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.511 13:17:14 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.511 13:17:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.511 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.511 ************************************ 00:06:17.511 START TEST cpu_locks 00:06:17.511 ************************************ 00:06:17.511 13:17:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.511 * Looking for test storage... 00:06:17.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.511 13:17:14 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.511 13:17:14 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.511 13:17:14 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.511 13:17:14 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.511 13:17:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.511 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.511 ************************************ 00:06:17.511 START TEST default_locks 00:06:17.511 ************************************ 00:06:17.511 13:17:14 -- common/autotest_common.sh@1104 -- # default_locks 00:06:17.511 13:17:14 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=755835 00:06:17.511 13:17:14 -- event/cpu_locks.sh@47 -- # waitforlisten 755835 00:06:17.511 13:17:14 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.511 13:17:14 -- common/autotest_common.sh@819 -- # '[' -z 755835 ']' 00:06:17.511 13:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.511 13:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.511 13:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.511 13:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.512 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.512 [2024-07-26 13:17:14.887737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:17.512 [2024-07-26 13:17:14.887808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755835 ] 00:06:17.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.512 [2024-07-26 13:17:14.953758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.772 [2024-07-26 13:17:14.990133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.772 [2024-07-26 13:17:14.990310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.344 13:17:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.344 13:17:15 -- common/autotest_common.sh@852 -- # return 0 00:06:18.344 13:17:15 -- event/cpu_locks.sh@49 -- # locks_exist 755835 00:06:18.344 13:17:15 -- event/cpu_locks.sh@22 -- # lslocks -p 755835 00:06:18.344 13:17:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.917 lslocks: write error 00:06:18.917 13:17:16 -- event/cpu_locks.sh@50 -- # killprocess 755835 00:06:18.917 13:17:16 -- common/autotest_common.sh@926 -- # '[' -z 755835 ']' 00:06:18.917 13:17:16 -- common/autotest_common.sh@930 -- # kill -0 755835 00:06:18.917 13:17:16 -- common/autotest_common.sh@931 -- # uname 00:06:18.917 13:17:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.917 13:17:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 755835 00:06:18.917 13:17:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.917 13:17:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.917 13:17:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 755835' 00:06:18.917 killing process with pid 755835 00:06:18.917 13:17:16 -- common/autotest_common.sh@945 -- # kill 755835 00:06:18.917 13:17:16 -- common/autotest_common.sh@950 -- # wait 755835 00:06:19.178 13:17:16 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 755835 00:06:19.178 13:17:16 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.178 13:17:16 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 755835 00:06:19.178 13:17:16 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:19.178 13:17:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.178 13:17:16 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:19.178 13:17:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.178 13:17:16 -- common/autotest_common.sh@643 -- # waitforlisten 755835 00:06:19.178 13:17:16 -- common/autotest_common.sh@819 -- # '[' -z 755835 ']' 00:06:19.178 13:17:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.178 13:17:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.178 13:17:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.178 13:17:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.178 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (755835) - No such process 00:06:19.178 ERROR: process (pid: 755835) is no longer running 00:06:19.178 13:17:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.178 13:17:16 -- common/autotest_common.sh@852 -- # return 1 00:06:19.178 13:17:16 -- common/autotest_common.sh@643 -- # es=1 00:06:19.178 13:17:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:19.178 13:17:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:19.178 13:17:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:19.178 13:17:16 -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.178 13:17:16 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.178 13:17:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.178 13:17:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.178 00:06:19.178 real 0m1.656s 00:06:19.178 user 0m1.754s 00:06:19.178 sys 0m0.577s 00:06:19.178 13:17:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.178 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.178 ************************************ 00:06:19.178 END TEST default_locks 00:06:19.178 ************************************ 00:06:19.178 13:17:16 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.178 13:17:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.178 13:17:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.178 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.178 ************************************ 00:06:19.178 START TEST default_locks_via_rpc 00:06:19.178 ************************************ 00:06:19.178 13:17:16 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:19.178 13:17:16 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=756202 00:06:19.178 13:17:16 -- event/cpu_locks.sh@63 -- # waitforlisten 756202 00:06:19.178 13:17:16 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.178 13:17:16 -- common/autotest_common.sh@819 -- # '[' -z 756202 ']' 00:06:19.178 13:17:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.179 13:17:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.179 13:17:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.179 13:17:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.179 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:19.179 [2024-07-26 13:17:16.585451] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:19.179 [2024-07-26 13:17:16.585505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756202 ] 00:06:19.179 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.179 [2024-07-26 13:17:16.644830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.439 [2024-07-26 13:17:16.672013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.439 [2024-07-26 13:17:16.672146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.022 13:17:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.022 13:17:17 -- common/autotest_common.sh@852 -- # return 0 00:06:20.022 13:17:17 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:20.022 13:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.022 13:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.022 13:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.022 13:17:17 -- event/cpu_locks.sh@67 -- # no_locks 00:06:20.022 13:17:17 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.022 13:17:17 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.022 13:17:17 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.022 13:17:17 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.022 13:17:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.022 13:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.022 13:17:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.022 13:17:17 -- event/cpu_locks.sh@71 -- # locks_exist 756202 00:06:20.022 13:17:17 -- event/cpu_locks.sh@22 -- # lslocks -p 756202 00:06:20.022 13:17:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.325 13:17:17 -- event/cpu_locks.sh@73 -- # killprocess 756202 00:06:20.325 13:17:17 -- common/autotest_common.sh@926 -- # '[' -z 756202 ']' 00:06:20.325 13:17:17 -- common/autotest_common.sh@930 -- # kill -0 756202 00:06:20.325 13:17:17 -- common/autotest_common.sh@931 -- # uname 00:06:20.325 13:17:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.325 13:17:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 756202 00:06:20.325 13:17:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.325 13:17:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.325 13:17:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 756202' 00:06:20.325 killing process with pid 756202 00:06:20.325 13:17:17 -- common/autotest_common.sh@945 -- # kill 756202 00:06:20.325 13:17:17 -- common/autotest_common.sh@950 -- # wait 756202 00:06:20.586 00:06:20.586 real 0m1.388s 00:06:20.586 user 0m1.480s 00:06:20.586 sys 0m0.459s 00:06:20.586 13:17:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.586 13:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.586 ************************************ 00:06:20.586 END TEST default_locks_via_rpc 00:06:20.586 ************************************ 00:06:20.586 13:17:17 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.586 13:17:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.586 13:17:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.586 13:17:17 -- common/autotest_common.sh@10 
-- # set +x 00:06:20.586 ************************************ 00:06:20.586 START TEST non_locking_app_on_locked_coremask 00:06:20.586 ************************************ 00:06:20.586 13:17:17 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:20.586 13:17:17 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=756569 00:06:20.586 13:17:17 -- event/cpu_locks.sh@81 -- # waitforlisten 756569 /var/tmp/spdk.sock 00:06:20.586 13:17:17 -- common/autotest_common.sh@819 -- # '[' -z 756569 ']' 00:06:20.586 13:17:17 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.586 13:17:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.586 13:17:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.586 13:17:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.586 13:17:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.586 13:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.586 [2024-07-26 13:17:18.029250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:20.586 [2024-07-26 13:17:18.029315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756569 ] 00:06:20.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.848 [2024-07-26 13:17:18.091462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.848 [2024-07-26 13:17:18.123652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.848 [2024-07-26 13:17:18.123805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.420 13:17:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.420 13:17:18 -- common/autotest_common.sh@852 -- # return 0 00:06:21.420 13:17:18 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=756614 00:06:21.420 13:17:18 -- event/cpu_locks.sh@85 -- # waitforlisten 756614 /var/tmp/spdk2.sock 00:06:21.420 13:17:18 -- common/autotest_common.sh@819 -- # '[' -z 756614 ']' 00:06:21.420 13:17:18 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.420 13:17:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.420 13:17:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.420 13:17:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.420 13:17:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.420 13:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.420 [2024-07-26 13:17:18.817867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
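The default_locks_via_rpc run above toggles the same core locks on a live target over JSON-RPC instead of restarting it. A hedged sketch of the equivalent manual calls, assuming the standard scripts/rpc.py client from the SPDK tree and the default /var/tmp/spdk.sock socket (method names are taken from the log):
  ./scripts/rpc.py framework_disable_cpumask_locks   # releases the /var/tmp/spdk_cpu_lock_* files
  ./scripts/rpc.py framework_enable_cpumask_locks    # re-claims them for the target's core mask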
00:06:21.420 [2024-07-26 13:17:18.817919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756614 ] 00:06:21.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.681 [2024-07-26 13:17:18.906974] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.681 [2024-07-26 13:17:18.907003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.681 [2024-07-26 13:17:18.963873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.681 [2024-07-26 13:17:18.964002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.253 13:17:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.253 13:17:19 -- common/autotest_common.sh@852 -- # return 0 00:06:22.253 13:17:19 -- event/cpu_locks.sh@87 -- # locks_exist 756569 00:06:22.253 13:17:19 -- event/cpu_locks.sh@22 -- # lslocks -p 756569 00:06:22.253 13:17:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.826 lslocks: write error 00:06:22.826 13:17:20 -- event/cpu_locks.sh@89 -- # killprocess 756569 00:06:22.826 13:17:20 -- common/autotest_common.sh@926 -- # '[' -z 756569 ']' 00:06:22.826 13:17:20 -- common/autotest_common.sh@930 -- # kill -0 756569 00:06:22.826 13:17:20 -- common/autotest_common.sh@931 -- # uname 00:06:22.826 13:17:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.826 13:17:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 756569 00:06:22.826 13:17:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.826 13:17:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.826 13:17:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 756569' 00:06:22.826 killing process with pid 756569 00:06:22.826 13:17:20 -- common/autotest_common.sh@945 -- # kill 756569 00:06:22.826 13:17:20 -- common/autotest_common.sh@950 -- # wait 756569 00:06:23.399 13:17:20 -- event/cpu_locks.sh@90 -- # killprocess 756614 00:06:23.399 13:17:20 -- common/autotest_common.sh@926 -- # '[' -z 756614 ']' 00:06:23.399 13:17:20 -- common/autotest_common.sh@930 -- # kill -0 756614 00:06:23.399 13:17:20 -- common/autotest_common.sh@931 -- # uname 00:06:23.399 13:17:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.399 13:17:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 756614 00:06:23.399 13:17:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.399 13:17:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.399 13:17:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 756614' 00:06:23.399 killing process with pid 756614 00:06:23.399 13:17:20 -- common/autotest_common.sh@945 -- # kill 756614 00:06:23.399 13:17:20 -- common/autotest_common.sh@950 -- # wait 756614 00:06:23.399 00:06:23.399 real 0m2.843s 00:06:23.399 user 0m3.079s 00:06:23.399 sys 0m0.876s 00:06:23.399 13:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.399 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.399 ************************************ 00:06:23.399 END TEST non_locking_app_on_locked_coremask 00:06:23.399 ************************************ 00:06:23.399 13:17:20 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:06:23.399 13:17:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.399 13:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.399 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.399 ************************************ 00:06:23.399 START TEST locking_app_on_unlocked_coremask 00:06:23.399 ************************************ 00:06:23.399 13:17:20 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:23.399 13:17:20 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=757205 00:06:23.399 13:17:20 -- event/cpu_locks.sh@99 -- # waitforlisten 757205 /var/tmp/spdk.sock 00:06:23.399 13:17:20 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.399 13:17:20 -- common/autotest_common.sh@819 -- # '[' -z 757205 ']' 00:06:23.399 13:17:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.399 13:17:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.399 13:17:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.399 13:17:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.399 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.660 [2024-07-26 13:17:20.908322] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:23.660 [2024-07-26 13:17:20.908392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757205 ] 00:06:23.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.660 [2024-07-26 13:17:20.968373] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:23.660 [2024-07-26 13:17:20.968404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.660 [2024-07-26 13:17:21.000698] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.660 [2024-07-26 13:17:21.000844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.232 13:17:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.232 13:17:21 -- common/autotest_common.sh@852 -- # return 0 00:06:24.232 13:17:21 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.232 13:17:21 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=757297 00:06:24.232 13:17:21 -- event/cpu_locks.sh@103 -- # waitforlisten 757297 /var/tmp/spdk2.sock 00:06:24.232 13:17:21 -- common/autotest_common.sh@819 -- # '[' -z 757297 ']' 00:06:24.232 13:17:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.232 13:17:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.232 13:17:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:24.232 13:17:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.232 13:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:24.232 [2024-07-26 13:17:21.689422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:24.232 [2024-07-26 13:17:21.689470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757297 ] 00:06:24.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.493 [2024-07-26 13:17:21.776088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.493 [2024-07-26 13:17:21.833029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.493 [2024-07-26 13:17:21.833162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.065 13:17:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.065 13:17:22 -- common/autotest_common.sh@852 -- # return 0 00:06:25.065 13:17:22 -- event/cpu_locks.sh@105 -- # locks_exist 757297 00:06:25.065 13:17:22 -- event/cpu_locks.sh@22 -- # lslocks -p 757297 00:06:25.065 13:17:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.636 lslocks: write error 00:06:25.636 13:17:22 -- event/cpu_locks.sh@107 -- # killprocess 757205 00:06:25.636 13:17:22 -- common/autotest_common.sh@926 -- # '[' -z 757205 ']' 00:06:25.636 13:17:22 -- common/autotest_common.sh@930 -- # kill -0 757205 00:06:25.636 13:17:22 -- common/autotest_common.sh@931 -- # uname 00:06:25.636 13:17:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.636 13:17:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 757205 00:06:25.636 13:17:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.636 13:17:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.636 13:17:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 757205' 00:06:25.636 killing process with pid 757205 00:06:25.636 13:17:23 -- common/autotest_common.sh@945 -- # kill 757205 00:06:25.636 13:17:23 -- common/autotest_common.sh@950 -- # wait 757205 00:06:26.209 13:17:23 -- event/cpu_locks.sh@108 -- # killprocess 757297 00:06:26.209 13:17:23 -- common/autotest_common.sh@926 -- # '[' -z 757297 ']' 00:06:26.209 13:17:23 -- common/autotest_common.sh@930 -- # kill -0 757297 00:06:26.209 13:17:23 -- common/autotest_common.sh@931 -- # uname 00:06:26.209 13:17:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.209 13:17:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 757297 00:06:26.209 13:17:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.209 13:17:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.209 13:17:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 757297' 00:06:26.209 killing process with pid 757297 00:06:26.209 13:17:23 -- common/autotest_common.sh@945 -- # kill 757297 00:06:26.209 13:17:23 -- common/autotest_common.sh@950 -- # wait 757297 00:06:26.209 00:06:26.209 real 0m2.786s 00:06:26.209 user 0m3.037s 00:06:26.209 sys 0m0.823s 00:06:26.209 13:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.209 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:26.209 ************************************ 00:06:26.209 END TEST locking_app_on_unlocked_coremask 00:06:26.209 
************************************ 00:06:26.209 13:17:23 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.209 13:17:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.209 13:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.209 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:26.471 ************************************ 00:06:26.471 START TEST locking_app_on_locked_coremask 00:06:26.471 ************************************ 00:06:26.471 13:17:23 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:26.471 13:17:23 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=757673 00:06:26.471 13:17:23 -- event/cpu_locks.sh@116 -- # waitforlisten 757673 /var/tmp/spdk.sock 00:06:26.471 13:17:23 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.471 13:17:23 -- common/autotest_common.sh@819 -- # '[' -z 757673 ']' 00:06:26.471 13:17:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.471 13:17:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.471 13:17:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.471 13:17:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.471 13:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:26.471 [2024-07-26 13:17:23.740077] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:26.471 [2024-07-26 13:17:23.740136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757673 ] 00:06:26.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.471 [2024-07-26 13:17:23.801574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.471 [2024-07-26 13:17:23.831840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.471 [2024-07-26 13:17:23.831973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.044 13:17:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.044 13:17:24 -- common/autotest_common.sh@852 -- # return 0 00:06:27.044 13:17:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=758009 00:06:27.044 13:17:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 758009 /var/tmp/spdk2.sock 00:06:27.044 13:17:24 -- common/autotest_common.sh@640 -- # local es=0 00:06:27.044 13:17:24 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.044 13:17:24 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 758009 /var/tmp/spdk2.sock 00:06:27.044 13:17:24 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:27.044 13:17:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.044 13:17:24 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:27.044 13:17:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:27.044 13:17:24 -- common/autotest_common.sh@643 -- # waitforlisten 758009 /var/tmp/spdk2.sock 00:06:27.044 13:17:24 -- common/autotest_common.sh@819 -- # '[' -z 758009 ']' 
00:06:27.044 13:17:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.044 13:17:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.044 13:17:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.044 13:17:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.044 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.305 [2024-07-26 13:17:24.563932] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:27.305 [2024-07-26 13:17:24.563985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758009 ] 00:06:27.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.305 [2024-07-26 13:17:24.654158] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 757673 has claimed it. 00:06:27.305 [2024-07-26 13:17:24.654199] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (758009) - No such process 00:06:27.981 ERROR: process (pid: 758009) is no longer running 00:06:27.981 13:17:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.981 13:17:25 -- common/autotest_common.sh@852 -- # return 1 00:06:27.981 13:17:25 -- common/autotest_common.sh@643 -- # es=1 00:06:27.981 13:17:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:27.981 13:17:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:27.981 13:17:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:27.981 13:17:25 -- event/cpu_locks.sh@122 -- # locks_exist 757673 00:06:27.981 13:17:25 -- event/cpu_locks.sh@22 -- # lslocks -p 757673 00:06:27.981 13:17:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.242 lslocks: write error 00:06:28.242 13:17:25 -- event/cpu_locks.sh@124 -- # killprocess 757673 00:06:28.242 13:17:25 -- common/autotest_common.sh@926 -- # '[' -z 757673 ']' 00:06:28.242 13:17:25 -- common/autotest_common.sh@930 -- # kill -0 757673 00:06:28.242 13:17:25 -- common/autotest_common.sh@931 -- # uname 00:06:28.242 13:17:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.242 13:17:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 757673 00:06:28.242 13:17:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.242 13:17:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.242 13:17:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 757673' 00:06:28.242 killing process with pid 757673 00:06:28.242 13:17:25 -- common/autotest_common.sh@945 -- # kill 757673 00:06:28.242 13:17:25 -- common/autotest_common.sh@950 -- # wait 757673 00:06:28.505 00:06:28.505 real 0m2.210s 00:06:28.505 user 0m2.474s 00:06:28.505 sys 0m0.598s 00:06:28.505 13:17:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.505 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:28.505 ************************************ 00:06:28.505 END TEST locking_app_on_locked_coremask 00:06:28.505 ************************************ 00:06:28.505 13:17:25 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:28.505 13:17:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.505 13:17:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.505 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:28.505 ************************************ 00:06:28.505 START TEST locking_overlapped_coremask 00:06:28.505 ************************************ 00:06:28.505 13:17:25 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:28.505 13:17:25 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=758321 00:06:28.505 13:17:25 -- event/cpu_locks.sh@133 -- # waitforlisten 758321 /var/tmp/spdk.sock 00:06:28.506 13:17:25 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:28.506 13:17:25 -- common/autotest_common.sh@819 -- # '[' -z 758321 ']' 00:06:28.506 13:17:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.506 13:17:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.506 13:17:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.506 13:17:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.506 13:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:28.767 [2024-07-26 13:17:25.994473] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:28.767 [2024-07-26 13:17:25.994538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758321 ] 00:06:28.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.767 [2024-07-26 13:17:26.054961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.767 [2024-07-26 13:17:26.087754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.767 [2024-07-26 13:17:26.088013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.767 [2024-07-26 13:17:26.088135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.767 [2024-07-26 13:17:26.088138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.338 13:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.338 13:17:26 -- common/autotest_common.sh@852 -- # return 0 00:06:29.338 13:17:26 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=758391 00:06:29.338 13:17:26 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 758391 /var/tmp/spdk2.sock 00:06:29.338 13:17:26 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.338 13:17:26 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.338 13:17:26 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 758391 /var/tmp/spdk2.sock 00:06:29.338 13:17:26 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:29.338 13:17:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.338 13:17:26 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:29.338 13:17:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.338 13:17:26 -- common/autotest_common.sh@643 -- # 
waitforlisten 758391 /var/tmp/spdk2.sock 00:06:29.338 13:17:26 -- common/autotest_common.sh@819 -- # '[' -z 758391 ']' 00:06:29.338 13:17:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.338 13:17:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.338 13:17:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.339 13:17:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.339 13:17:26 -- common/autotest_common.sh@10 -- # set +x 00:06:29.339 [2024-07-26 13:17:26.798285] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:29.339 [2024-07-26 13:17:26.798343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758391 ] 00:06:29.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.599 [2024-07-26 13:17:26.876009] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 758321 has claimed it. 00:06:29.599 [2024-07-26 13:17:26.876039] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (758391) - No such process 00:06:30.172 ERROR: process (pid: 758391) is no longer running 00:06:30.172 13:17:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.172 13:17:27 -- common/autotest_common.sh@852 -- # return 1 00:06:30.172 13:17:27 -- common/autotest_common.sh@643 -- # es=1 00:06:30.172 13:17:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:30.172 13:17:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:30.172 13:17:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:30.172 13:17:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.172 13:17:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.172 13:17:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.172 13:17:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.172 13:17:27 -- event/cpu_locks.sh@141 -- # killprocess 758321 00:06:30.172 13:17:27 -- common/autotest_common.sh@926 -- # '[' -z 758321 ']' 00:06:30.172 13:17:27 -- common/autotest_common.sh@930 -- # kill -0 758321 00:06:30.172 13:17:27 -- common/autotest_common.sh@931 -- # uname 00:06:30.172 13:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.172 13:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 758321 00:06:30.172 13:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.172 13:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.172 13:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 758321' 00:06:30.172 killing process with pid 758321 00:06:30.172 13:17:27 -- common/autotest_common.sh@945 -- # kill 758321 00:06:30.172 13:17:27 -- common/autotest_common.sh@950 -- # wait 758321 
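The locking_overlapped_coremask failure above comes from the two core masks sharing core 2: 0x7 selects cores 0-2 and 0x1c selects cores 2-4. A sketch of the collision, with mask values from the log and paths relative to the SPDK tree; the second command is expected to exit with the "Cannot create lock on core 2" error shown above:
  build/bin/spdk_tgt -m 0x7 &                          # claims cores 0, 1, 2
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # wants cores 2, 3, 4 -> cannot claim core 2, exits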
00:06:30.172 00:06:30.172 real 0m1.696s 00:06:30.172 user 0m4.886s 00:06:30.172 sys 0m0.345s 00:06:30.172 13:17:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.172 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.172 ************************************ 00:06:30.172 END TEST locking_overlapped_coremask 00:06:30.172 ************************************ 00:06:30.434 13:17:27 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.434 13:17:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.434 13:17:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.434 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.434 ************************************ 00:06:30.434 START TEST locking_overlapped_coremask_via_rpc 00:06:30.434 ************************************ 00:06:30.434 13:17:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:30.434 13:17:27 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=758725 00:06:30.434 13:17:27 -- event/cpu_locks.sh@149 -- # waitforlisten 758725 /var/tmp/spdk.sock 00:06:30.434 13:17:27 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.434 13:17:27 -- common/autotest_common.sh@819 -- # '[' -z 758725 ']' 00:06:30.434 13:17:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.434 13:17:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.434 13:17:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.434 13:17:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.434 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.434 [2024-07-26 13:17:27.737467] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:30.434 [2024-07-26 13:17:27.737528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758725 ] 00:06:30.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.434 [2024-07-26 13:17:27.796924] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.434 [2024-07-26 13:17:27.796953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.434 [2024-07-26 13:17:27.828879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.434 [2024-07-26 13:17:27.829132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.434 [2024-07-26 13:17:27.829272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.434 [2024-07-26 13:17:27.829441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.378 13:17:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.378 13:17:28 -- common/autotest_common.sh@852 -- # return 0 00:06:31.378 13:17:28 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=758765 00:06:31.378 13:17:28 -- event/cpu_locks.sh@153 -- # waitforlisten 758765 /var/tmp/spdk2.sock 00:06:31.378 13:17:28 -- common/autotest_common.sh@819 -- # '[' -z 758765 ']' 00:06:31.378 13:17:28 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.378 13:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.378 13:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.378 13:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.378 13:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.378 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:31.378 [2024-07-26 13:17:28.551288] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:31.378 [2024-07-26 13:17:28.551337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758765 ] 00:06:31.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.378 [2024-07-26 13:17:28.620872] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.378 [2024-07-26 13:17:28.620893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.378 [2024-07-26 13:17:28.677329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.378 [2024-07-26 13:17:28.677563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.378 [2024-07-26 13:17:28.681320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.378 [2024-07-26 13:17:28.681323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.950 13:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.950 13:17:29 -- common/autotest_common.sh@852 -- # return 0 00:06:31.950 13:17:29 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.950 13:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.950 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.950 13:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.950 13:17:29 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.950 13:17:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:31.950 13:17:29 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.950 13:17:29 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:31.950 13:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.950 13:17:29 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:31.950 13:17:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.950 13:17:29 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.950 13:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.950 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:31.950 [2024-07-26 13:17:29.321260] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 758725 has claimed it. 00:06:31.950 request: 00:06:31.950 { 00:06:31.950 "method": "framework_enable_cpumask_locks", 00:06:31.950 "req_id": 1 00:06:31.950 } 00:06:31.950 Got JSON-RPC error response 00:06:31.951 response: 00:06:31.951 { 00:06:31.951 "code": -32603, 00:06:31.951 "message": "Failed to claim CPU core: 2" 00:06:31.951 } 00:06:31.951 13:17:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:31.951 13:17:29 -- common/autotest_common.sh@643 -- # es=1 00:06:31.951 13:17:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:31.951 13:17:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:31.951 13:17:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:31.951 13:17:29 -- event/cpu_locks.sh@158 -- # waitforlisten 758725 /var/tmp/spdk.sock 00:06:31.951 13:17:29 -- common/autotest_common.sh@819 -- # '[' -z 758725 ']' 00:06:31.951 13:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.951 13:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.951 13:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
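In the via-RPC variant above, both targets start with --disable-cpumask-locks, so the collision only surfaces when the second target is asked to claim its cores at runtime. A hedged sketch of that failing call against the second target's socket, assuming the standard scripts/rpc.py client; the error matches the JSON-RPC response recorded in the log:
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> JSON-RPC error -32603: "Failed to claim CPU core: 2" (core 2 is already held by the first target)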
00:06:31.951 13:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.951 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.213 13:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.213 13:17:29 -- common/autotest_common.sh@852 -- # return 0 00:06:32.213 13:17:29 -- event/cpu_locks.sh@159 -- # waitforlisten 758765 /var/tmp/spdk2.sock 00:06:32.213 13:17:29 -- common/autotest_common.sh@819 -- # '[' -z 758765 ']' 00:06:32.213 13:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.213 13:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.213 13:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.213 13:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.213 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.213 13:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.213 13:17:29 -- common/autotest_common.sh@852 -- # return 0 00:06:32.213 13:17:29 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.213 13:17:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.213 13:17:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.213 13:17:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.213 00:06:32.213 real 0m1.963s 00:06:32.213 user 0m0.742s 00:06:32.213 sys 0m0.147s 00:06:32.213 13:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.213 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.213 ************************************ 00:06:32.213 END TEST locking_overlapped_coremask_via_rpc 00:06:32.213 ************************************ 00:06:32.213 13:17:29 -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.213 13:17:29 -- event/cpu_locks.sh@15 -- # [[ -z 758725 ]] 00:06:32.213 13:17:29 -- event/cpu_locks.sh@15 -- # killprocess 758725 00:06:32.213 13:17:29 -- common/autotest_common.sh@926 -- # '[' -z 758725 ']' 00:06:32.213 13:17:29 -- common/autotest_common.sh@930 -- # kill -0 758725 00:06:32.213 13:17:29 -- common/autotest_common.sh@931 -- # uname 00:06:32.475 13:17:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.475 13:17:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 758725 00:06:32.475 13:17:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.475 13:17:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.475 13:17:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 758725' 00:06:32.475 killing process with pid 758725 00:06:32.475 13:17:29 -- common/autotest_common.sh@945 -- # kill 758725 00:06:32.475 13:17:29 -- common/autotest_common.sh@950 -- # wait 758725 00:06:32.475 13:17:29 -- event/cpu_locks.sh@16 -- # [[ -z 758765 ]] 00:06:32.475 13:17:29 -- event/cpu_locks.sh@16 -- # killprocess 758765 00:06:32.475 13:17:29 -- common/autotest_common.sh@926 -- # '[' -z 758765 ']' 00:06:32.475 13:17:29 -- common/autotest_common.sh@930 -- # kill -0 758765 00:06:32.475 13:17:29 -- common/autotest_common.sh@931 -- # uname 00:06:32.737 
13:17:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.737 13:17:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 758765 00:06:32.737 13:17:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:32.737 13:17:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:32.737 13:17:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 758765' 00:06:32.737 killing process with pid 758765 00:06:32.737 13:17:29 -- common/autotest_common.sh@945 -- # kill 758765 00:06:32.737 13:17:29 -- common/autotest_common.sh@950 -- # wait 758765 00:06:32.737 13:17:30 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.737 13:17:30 -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.737 13:17:30 -- event/cpu_locks.sh@15 -- # [[ -z 758725 ]] 00:06:32.737 13:17:30 -- event/cpu_locks.sh@15 -- # killprocess 758725 00:06:32.737 13:17:30 -- common/autotest_common.sh@926 -- # '[' -z 758725 ']' 00:06:32.737 13:17:30 -- common/autotest_common.sh@930 -- # kill -0 758725 00:06:32.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (758725) - No such process 00:06:32.737 13:17:30 -- common/autotest_common.sh@953 -- # echo 'Process with pid 758725 is not found' 00:06:32.737 Process with pid 758725 is not found 00:06:32.737 13:17:30 -- event/cpu_locks.sh@16 -- # [[ -z 758765 ]] 00:06:32.737 13:17:30 -- event/cpu_locks.sh@16 -- # killprocess 758765 00:06:32.737 13:17:30 -- common/autotest_common.sh@926 -- # '[' -z 758765 ']' 00:06:32.737 13:17:30 -- common/autotest_common.sh@930 -- # kill -0 758765 00:06:32.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (758765) - No such process 00:06:32.737 13:17:30 -- common/autotest_common.sh@953 -- # echo 'Process with pid 758765 is not found' 00:06:32.737 Process with pid 758765 is not found 00:06:32.737 13:17:30 -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.737 00:06:32.737 real 0m15.460s 00:06:32.737 user 0m26.895s 00:06:32.737 sys 0m4.583s 00:06:32.737 13:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.737 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.737 ************************************ 00:06:32.737 END TEST cpu_locks 00:06:32.737 ************************************ 00:06:33.000 00:06:33.000 real 0m41.011s 00:06:33.000 user 1m20.928s 00:06:33.000 sys 0m7.493s 00:06:33.000 13:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.000 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 END TEST event 00:06:33.000 ************************************ 00:06:33.000 13:17:30 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.000 13:17:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.000 13:17:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.000 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 START TEST thread 00:06:33.000 ************************************ 00:06:33.000 13:17:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.000 * Looking for test storage... 
00:06:33.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:33.000 13:17:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.000 13:17:30 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:33.000 13:17:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.000 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:06:33.000 ************************************ 00:06:33.000 START TEST thread_poller_perf 00:06:33.000 ************************************ 00:06:33.000 13:17:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.000 [2024-07-26 13:17:30.389022] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:33.000 [2024-07-26 13:17:30.389124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759201 ] 00:06:33.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.000 [2024-07-26 13:17:30.458666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.261 [2024-07-26 13:17:30.495211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.261 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.205 ====================================== 00:06:34.205 busy:2414176408 (cyc) 00:06:34.205 total_run_count: 276000 00:06:34.205 tsc_hz: 2400000000 (cyc) 00:06:34.205 ====================================== 00:06:34.205 poller_cost: 8747 (cyc), 3644 (nsec) 00:06:34.205 00:06:34.205 real 0m1.177s 00:06:34.205 user 0m1.105s 00:06:34.205 sys 0m0.068s 00:06:34.205 13:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.205 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:34.205 ************************************ 00:06:34.205 END TEST thread_poller_perf 00:06:34.205 ************************************ 00:06:34.205 13:17:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.205 13:17:31 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:34.205 13:17:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.205 13:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:34.205 ************************************ 00:06:34.205 START TEST thread_poller_perf 00:06:34.205 ************************************ 00:06:34.205 13:17:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.205 [2024-07-26 13:17:31.609189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:34.205 [2024-07-26 13:17:31.609281] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759551 ] 00:06:34.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.205 [2024-07-26 13:17:31.672485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.466 [2024-07-26 13:17:31.699774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.466 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:35.479 ====================================== 00:06:35.479 busy:2402254362 (cyc) 00:06:35.479 total_run_count: 3808000 00:06:35.479 tsc_hz: 2400000000 (cyc) 00:06:35.479 ====================================== 00:06:35.479 poller_cost: 630 (cyc), 262 (nsec) 00:06:35.479 00:06:35.479 real 0m1.152s 00:06:35.479 user 0m1.081s 00:06:35.479 sys 0m0.067s 00:06:35.479 13:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.479 13:17:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.479 ************************************ 00:06:35.479 END TEST thread_poller_perf 00:06:35.479 ************************************ 00:06:35.479 13:17:32 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.479 00:06:35.479 real 0m2.506s 00:06:35.479 user 0m2.249s 00:06:35.479 sys 0m0.268s 00:06:35.479 13:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.479 13:17:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.479 ************************************ 00:06:35.479 END TEST thread 00:06:35.479 ************************************ 00:06:35.479 13:17:32 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:35.479 13:17:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.479 13:17:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.479 13:17:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.479 ************************************ 00:06:35.479 START TEST accel 00:06:35.479 ************************************ 00:06:35.479 13:17:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:35.479 * Looking for test storage... 00:06:35.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:35.479 13:17:32 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:35.479 13:17:32 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:35.479 13:17:32 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.479 13:17:32 -- accel/accel.sh@59 -- # spdk_tgt_pid=759942 00:06:35.479 13:17:32 -- accel/accel.sh@60 -- # waitforlisten 759942 00:06:35.479 13:17:32 -- common/autotest_common.sh@819 -- # '[' -z 759942 ']' 00:06:35.479 13:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.479 13:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.479 13:17:32 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:35.479 13:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
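The poller_cost figures in the two benchmark summaries above appear to follow from dividing the busy cycle count by the number of poller runs, then converting cycles to nanoseconds at the reported 2400000000 Hz TSC. A quick back-of-envelope check with bc, using only numbers from the log (the derivation itself is an inference, not stated by the tool):
  echo '2414176408 / 276000' | bc                # first run:  ~8747 cycles per poller
  echo '8747 * 1000000000 / 2400000000' | bc     # ~3644 nsec at 2.4 GHz
  echo '2402254362 / 3808000' | bc               # second run: ~630 cycles per poller
  echo '630 * 1000000000 / 2400000000' | bc      # ~262 nsec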
00:06:35.479 13:17:32 -- accel/accel.sh@58 -- # build_accel_config 00:06:35.479 13:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.479 13:17:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.479 13:17:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.479 13:17:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.479 13:17:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.479 13:17:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.479 13:17:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.479 13:17:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.479 13:17:32 -- accel/accel.sh@42 -- # jq -r . 00:06:35.741 [2024-07-26 13:17:32.970190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.741 [2024-07-26 13:17:32.970265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759942 ] 00:06:35.741 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.741 [2024-07-26 13:17:33.033713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.741 [2024-07-26 13:17:33.070537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.741 [2024-07-26 13:17:33.070688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.312 13:17:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.312 13:17:33 -- common/autotest_common.sh@852 -- # return 0 00:06:36.312 13:17:33 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:36.312 13:17:33 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:36.312 13:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.312 13:17:33 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:36.312 13:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:36.312 13:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.312 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.312 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.312 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.313 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.313 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.313 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:36.313 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.313 13:17:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # IFS== 00:06:36.313 13:17:33 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.313 13:17:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.313 13:17:33 -- accel/accel.sh@67 -- # killprocess 759942 00:06:36.313 13:17:33 -- common/autotest_common.sh@926 -- # '[' -z 759942 ']' 00:06:36.313 13:17:33 -- common/autotest_common.sh@930 -- # kill -0 759942 00:06:36.313 13:17:33 -- common/autotest_common.sh@931 -- # uname 00:06:36.574 13:17:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.574 13:17:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 759942 00:06:36.574 13:17:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.574 13:17:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.574 13:17:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 759942' 00:06:36.574 killing process with pid 759942 00:06:36.574 13:17:33 -- common/autotest_common.sh@945 -- # kill 759942 00:06:36.574 13:17:33 -- common/autotest_common.sh@950 -- # wait 759942 00:06:36.574 13:17:34 -- accel/accel.sh@68 -- # trap - ERR 00:06:36.574 13:17:34 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:36.574 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:36.574 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.574 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.574 13:17:34 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:36.574 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:36.574 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.574 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.574 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.574 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.574 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.574 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.574 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.574 13:17:34 -- accel/accel.sh@42 -- # jq -r . 
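The opcode-assignment check traced above (accel.sh@62-65) asks the running target which module backs each accel opcode and records the answer; in this run every opcode resolves to the software module. A standalone sketch of that step, assuming a running spdk_tgt and using scripts/rpc.py in place of the harness's $rpc_py / rpc_cmd helper:

```bash
#!/usr/bin/env bash
# Sketch of the opcode-assignment query; assumes spdk_tgt is already up and
# that scripts/rpc.py can reach it (the harness uses its rpc_cmd helper instead).
rpc_py=scripts/rpc.py

declare -A expected_opcs
exp_opcs=($("$rpc_py" accel_get_opc_assignments \
            | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))

for opc_opt in "${exp_opcs[@]}"; do
    # Entries look like "copy=software"; split on '=' exactly as traced (IFS==).
    IFS== read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=$module   # every opcode maps to "software" in this run
done
```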
00:06:36.835 13:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.835 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.835 13:17:34 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:36.835 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:36.835 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.835 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.835 ************************************ 00:06:36.835 START TEST accel_missing_filename 00:06:36.835 ************************************ 00:06:36.835 13:17:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:36.835 13:17:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.835 13:17:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:36.835 13:17:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:36.835 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.835 13:17:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:36.835 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.835 13:17:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:36.835 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:36.835 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.835 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.835 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.835 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.835 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.835 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.835 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.835 13:17:34 -- accel/accel.sh@42 -- # jq -r . 00:06:36.835 [2024-07-26 13:17:34.140976] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:36.835 [2024-07-26 13:17:34.141068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760163 ] 00:06:36.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.835 [2024-07-26 13:17:34.216424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.835 [2024-07-26 13:17:34.250697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.835 [2024-07-26 13:17:34.283721] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.097 [2024-07-26 13:17:34.322101] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.097 A filename is required. 
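The accel_missing_filename case above runs accel_perf under the NOT wrapper, so the "A filename is required." error is the expected outcome. The exit-status handling traced in the next lines (es=234, then 106, then 1) behaves roughly like the sketch below; the helper name mirrors autotest_common.sh, but the body is an illustration, not the exact implementation.

```bash
# Illustrative expected-failure wrapper, consistent with the es=234 -> 106 -> 1
# sequence traced below; accel_perf here stands for the harness function that
# wraps build/examples/accel_perf.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$((es - 128))   # strip the signal offset (234 -> 106)
    case "$es" in
        0) ;;
        *) es=1 ;;                       # collapse any real failure to 1
    esac
    (( !es == 0 ))                       # succeed only if the wrapped command failed
}

NOT accel_perf -t 1 -w compress          # passes: compress without -l must fail
```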
00:06:37.097 13:17:34 -- common/autotest_common.sh@643 -- # es=234 00:06:37.097 13:17:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.097 13:17:34 -- common/autotest_common.sh@652 -- # es=106 00:06:37.097 13:17:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:37.097 13:17:34 -- common/autotest_common.sh@660 -- # es=1 00:06:37.097 13:17:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.097 00:06:37.097 real 0m0.253s 00:06:37.097 user 0m0.177s 00:06:37.097 sys 0m0.119s 00:06:37.097 13:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.097 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.097 ************************************ 00:06:37.097 END TEST accel_missing_filename 00:06:37.097 ************************************ 00:06:37.097 13:17:34 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.097 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:37.097 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.097 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.097 ************************************ 00:06:37.097 START TEST accel_compress_verify 00:06:37.097 ************************************ 00:06:37.097 13:17:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.097 13:17:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.097 13:17:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.097 13:17:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.097 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.097 13:17:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.097 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.097 13:17:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.097 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.097 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.097 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.097 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.097 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.097 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.097 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.097 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.097 13:17:34 -- accel/accel.sh@42 -- # jq -r . 00:06:37.097 [2024-07-26 13:17:34.435228] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:37.097 [2024-07-26 13:17:34.435295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760337 ] 00:06:37.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.097 [2024-07-26 13:17:34.495517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.097 [2024-07-26 13:17:34.522908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.097 [2024-07-26 13:17:34.554612] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.359 [2024-07-26 13:17:34.591412] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.359 00:06:37.359 Compression does not support the verify option, aborting. 00:06:37.359 13:17:34 -- common/autotest_common.sh@643 -- # es=161 00:06:37.359 13:17:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.359 13:17:34 -- common/autotest_common.sh@652 -- # es=33 00:06:37.359 13:17:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:37.359 13:17:34 -- common/autotest_common.sh@660 -- # es=1 00:06:37.359 13:17:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.359 00:06:37.359 real 0m0.226s 00:06:37.359 user 0m0.168s 00:06:37.359 sys 0m0.100s 00:06:37.359 13:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.359 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.359 ************************************ 00:06:37.359 END TEST accel_compress_verify 00:06:37.359 ************************************ 00:06:37.359 13:17:34 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:37.359 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:37.359 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.359 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.359 ************************************ 00:06:37.360 START TEST accel_wrong_workload 00:06:37.360 ************************************ 00:06:37.360 13:17:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:37.360 13:17:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.360 13:17:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:37.360 13:17:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.360 13:17:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.360 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.360 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.360 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.360 13:17:34 -- accel/accel.sh@42 -- # jq -r . 
00:06:37.360 Unsupported workload type: foobar 00:06:37.360 [2024-07-26 13:17:34.696560] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:37.360 accel_perf options: 00:06:37.360 [-h help message] 00:06:37.360 [-q queue depth per core] 00:06:37.360 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.360 [-T number of threads per core 00:06:37.360 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.360 [-t time in seconds] 00:06:37.360 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.360 [ dif_verify, , dif_generate, dif_generate_copy 00:06:37.360 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.360 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.360 [-S for crc32c workload, use this seed value (default 0) 00:06:37.360 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.360 [-f for fill workload, use this BYTE value (default 255) 00:06:37.360 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.360 [-y verify result if this switch is on] 00:06:37.360 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.360 Can be used to spread operations across a wider range of memory. 00:06:37.360 13:17:34 -- common/autotest_common.sh@643 -- # es=1 00:06:37.360 13:17:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.360 13:17:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.360 13:17:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.360 00:06:37.360 real 0m0.032s 00:06:37.360 user 0m0.021s 00:06:37.360 sys 0m0.011s 00:06:37.360 13:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.360 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.360 ************************************ 00:06:37.360 END TEST accel_wrong_workload 00:06:37.360 ************************************ 00:06:37.360 Error: writing output failed: Broken pipe 00:06:37.360 13:17:34 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.360 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:37.360 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.360 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.360 ************************************ 00:06:37.360 START TEST accel_negative_buffers 00:06:37.360 ************************************ 00:06:37.360 13:17:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:37.360 13:17:34 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.360 13:17:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:37.360 13:17:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.360 13:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.360 13:17:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.360 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.360 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.360 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.360 13:17:34 -- accel/accel.sh@42 -- # jq -r . 00:06:37.360 -x option must be non-negative. 00:06:37.360 [2024-07-26 13:17:34.761959] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:37.360 accel_perf options: 00:06:37.360 [-h help message] 00:06:37.360 [-q queue depth per core] 00:06:37.360 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:37.360 [-T number of threads per core 00:06:37.360 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:37.360 [-t time in seconds] 00:06:37.360 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:37.360 [ dif_verify, , dif_generate, dif_generate_copy 00:06:37.360 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:37.360 [-l for compress/decompress workloads, name of uncompressed input file 00:06:37.360 [-S for crc32c workload, use this seed value (default 0) 00:06:37.360 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:37.360 [-f for fill workload, use this BYTE value (default 255) 00:06:37.360 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:37.360 [-y verify result if this switch is on] 00:06:37.360 [-a tasks to allocate per core (default: same value as -q)] 00:06:37.360 Can be used to spread operations across a wider range of memory. 
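Both negative cases end the same way: accel_perf rejects the arguments (foobar is not a valid -w workload, and -x -1 is below the documented two-buffer minimum for xor) and prints the option summary above. For contrast, a valid xor invocation built only from flags listed in that summary; the path is relative to the SPDK tree, and the harness's -c /dev/fd/62 JSON-config plumbing is omitted so the command stands alone.

```bash
# Accepted counterpart to the rejected "-x -1" call: xor with the documented
# minimum of two source buffers, a 1 second run, queue depth 64, verify on.
./build/examples/accel_perf -t 1 -w xor -x 2 -q 64 -y
```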
00:06:37.360 13:17:34 -- common/autotest_common.sh@643 -- # es=1 00:06:37.360 13:17:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.360 13:17:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.360 13:17:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.360 00:06:37.360 real 0m0.031s 00:06:37.360 user 0m0.019s 00:06:37.360 sys 0m0.012s 00:06:37.360 13:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.360 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.360 ************************************ 00:06:37.360 END TEST accel_negative_buffers 00:06:37.360 ************************************ 00:06:37.360 Error: writing output failed: Broken pipe 00:06:37.360 13:17:34 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:37.360 13:17:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:37.360 13:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.360 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.360 ************************************ 00:06:37.360 START TEST accel_crc32c 00:06:37.360 ************************************ 00:06:37.360 13:17:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:37.360 13:17:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.360 13:17:34 -- accel/accel.sh@17 -- # local accel_module 00:06:37.360 13:17:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:37.360 13:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.360 13:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.360 13:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.360 13:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.360 13:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.360 13:17:34 -- accel/accel.sh@42 -- # jq -r . 00:06:37.360 [2024-07-26 13:17:34.830241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:37.360 [2024-07-26 13:17:34.830306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760394 ] 00:06:37.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.622 [2024-07-26 13:17:34.893647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.622 [2024-07-26 13:17:34.928946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.010 13:17:36 -- accel/accel.sh@18 -- # out=' 00:06:39.010 SPDK Configuration: 00:06:39.010 Core mask: 0x1 00:06:39.010 00:06:39.010 Accel Perf Configuration: 00:06:39.010 Workload Type: crc32c 00:06:39.010 CRC-32C seed: 32 00:06:39.010 Transfer size: 4096 bytes 00:06:39.010 Vector count 1 00:06:39.010 Module: software 00:06:39.010 Queue depth: 32 00:06:39.010 Allocate depth: 32 00:06:39.010 # threads/core: 1 00:06:39.010 Run time: 1 seconds 00:06:39.010 Verify: Yes 00:06:39.010 00:06:39.010 Running for 1 seconds... 
00:06:39.010 00:06:39.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.010 ------------------------------------------------------------------------------------ 00:06:39.010 0,0 448992/s 1753 MiB/s 0 0 00:06:39.010 ==================================================================================== 00:06:39.010 Total 448992/s 1753 MiB/s 0 0' 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.010 13:17:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.010 13:17:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.010 13:17:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.010 13:17:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.010 13:17:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.010 13:17:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.010 13:17:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.010 13:17:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.010 13:17:36 -- accel/accel.sh@42 -- # jq -r . 00:06:39.010 [2024-07-26 13:17:36.068031] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:39.010 [2024-07-26 13:17:36.068107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760714 ] 00:06:39.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.010 [2024-07-26 13:17:36.127772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.010 [2024-07-26 13:17:36.157585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=0x1 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=crc32c 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=32 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 
13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=software 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=32 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=32 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=1 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val=Yes 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.010 13:17:36 -- accel/accel.sh@21 -- # val= 00:06:39.010 13:17:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.010 13:17:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 
00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@21 -- # val= 00:06:39.954 13:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # IFS=: 00:06:39.954 13:17:37 -- accel/accel.sh@20 -- # read -r var val 00:06:39.954 13:17:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.954 13:17:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:39.954 13:17:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.954 00:06:39.954 real 0m2.464s 00:06:39.954 user 0m2.264s 00:06:39.954 sys 0m0.197s 00:06:39.954 13:17:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.954 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.954 ************************************ 00:06:39.954 END TEST accel_crc32c 00:06:39.954 ************************************ 00:06:39.954 13:17:37 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:39.954 13:17:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:39.954 13:17:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.954 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.954 ************************************ 00:06:39.954 START TEST accel_crc32c_C2 00:06:39.954 ************************************ 00:06:39.954 13:17:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:39.954 13:17:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.954 13:17:37 -- accel/accel.sh@17 -- # local accel_module 00:06:39.954 13:17:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:39.954 13:17:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:39.954 13:17:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.954 13:17:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.954 13:17:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.954 13:17:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.954 13:17:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.954 13:17:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.954 13:17:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.954 13:17:37 -- accel/accel.sh@42 -- # jq -r . 00:06:39.954 [2024-07-26 13:17:37.334286] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:39.954 [2024-07-26 13:17:37.334378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760857 ] 00:06:39.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.954 [2024-07-26 13:17:37.395012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.954 [2024-07-26 13:17:37.424277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.341 13:17:38 -- accel/accel.sh@18 -- # out=' 00:06:41.341 SPDK Configuration: 00:06:41.341 Core mask: 0x1 00:06:41.341 00:06:41.341 Accel Perf Configuration: 00:06:41.341 Workload Type: crc32c 00:06:41.341 CRC-32C seed: 0 00:06:41.341 Transfer size: 4096 bytes 00:06:41.341 Vector count 2 00:06:41.341 Module: software 00:06:41.342 Queue depth: 32 00:06:41.342 Allocate depth: 32 00:06:41.342 # threads/core: 1 00:06:41.342 Run time: 1 seconds 00:06:41.342 Verify: Yes 00:06:41.342 00:06:41.342 Running for 1 seconds... 00:06:41.342 00:06:41.342 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.342 ------------------------------------------------------------------------------------ 00:06:41.342 0,0 374944/s 2929 MiB/s 0 0 00:06:41.342 ==================================================================================== 00:06:41.342 Total 374944/s 1464 MiB/s 0 0' 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:41.342 13:17:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:41.342 13:17:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.342 13:17:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.342 13:17:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.342 13:17:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.342 13:17:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.342 13:17:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.342 13:17:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.342 13:17:38 -- accel/accel.sh@42 -- # jq -r . 00:06:41.342 [2024-07-26 13:17:38.563592] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:41.342 [2024-07-26 13:17:38.563689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761101 ] 00:06:41.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.342 [2024-07-26 13:17:38.624543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.342 [2024-07-26 13:17:38.652682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=0x1 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=crc32c 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=0 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=software 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=32 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=32 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- 
accel/accel.sh@21 -- # val=1 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val=Yes 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:41.342 13:17:38 -- accel/accel.sh@21 -- # val= 00:06:41.342 13:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # IFS=: 00:06:41.342 13:17:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@21 -- # val= 00:06:42.729 13:17:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # IFS=: 00:06:42.729 13:17:39 -- accel/accel.sh@20 -- # read -r var val 00:06:42.729 13:17:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.729 13:17:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:42.729 13:17:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.729 00:06:42.729 real 0m2.457s 00:06:42.729 user 0m1.138s 00:06:42.729 sys 0m0.092s 00:06:42.729 13:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.729 13:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 END TEST accel_crc32c_C2 00:06:42.729 ************************************ 00:06:42.729 13:17:39 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.729 13:17:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:42.729 13:17:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.729 13:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 START TEST accel_copy 
00:06:42.729 ************************************ 00:06:42.729 13:17:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:42.729 13:17:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.729 13:17:39 -- accel/accel.sh@17 -- # local accel_module 00:06:42.729 13:17:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:42.729 13:17:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.729 13:17:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.729 13:17:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.729 13:17:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.729 13:17:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.729 13:17:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.729 13:17:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.729 13:17:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.729 13:17:39 -- accel/accel.sh@42 -- # jq -r . 00:06:42.729 [2024-07-26 13:17:39.828120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.729 [2024-07-26 13:17:39.828219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761452 ] 00:06:42.729 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.729 [2024-07-26 13:17:39.889753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.729 [2024-07-26 13:17:39.917511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.673 13:17:41 -- accel/accel.sh@18 -- # out=' 00:06:43.673 SPDK Configuration: 00:06:43.673 Core mask: 0x1 00:06:43.673 00:06:43.673 Accel Perf Configuration: 00:06:43.673 Workload Type: copy 00:06:43.673 Transfer size: 4096 bytes 00:06:43.673 Vector count 1 00:06:43.673 Module: software 00:06:43.673 Queue depth: 32 00:06:43.673 Allocate depth: 32 00:06:43.673 # threads/core: 1 00:06:43.673 Run time: 1 seconds 00:06:43.673 Verify: Yes 00:06:43.673 00:06:43.673 Running for 1 seconds... 00:06:43.673 00:06:43.673 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.673 ------------------------------------------------------------------------------------ 00:06:43.673 0,0 303392/s 1185 MiB/s 0 0 00:06:43.673 ==================================================================================== 00:06:43.673 Total 303392/s 1185 MiB/s 0 0' 00:06:43.673 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.673 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.673 13:17:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:43.673 13:17:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.673 13:17:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.673 13:17:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.673 13:17:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.673 13:17:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.673 13:17:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.673 13:17:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.673 13:17:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.673 13:17:41 -- accel/accel.sh@42 -- # jq -r . 00:06:43.673 [2024-07-26 13:17:41.057149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:43.673 [2024-07-26 13:17:41.057257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761748 ] 00:06:43.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.673 [2024-07-26 13:17:41.117759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.673 [2024-07-26 13:17:41.145893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=0x1 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=copy 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=software 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=32 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=32 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=1 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val=Yes 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:43.935 13:17:41 -- accel/accel.sh@21 -- # val= 00:06:43.935 13:17:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # IFS=: 00:06:43.935 13:17:41 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.879 13:17:42 -- accel/accel.sh@21 -- # val= 00:06:44.879 13:17:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.879 13:17:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.880 13:17:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.880 13:17:42 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:44.880 13:17:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.880 00:06:44.880 real 0m2.456s 00:06:44.880 user 0m2.259s 00:06:44.880 sys 0m0.192s 00:06:44.880 13:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.880 13:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.880 ************************************ 00:06:44.880 END TEST accel_copy 00:06:44.880 ************************************ 00:06:44.880 13:17:42 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.880 13:17:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:44.880 13:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.880 13:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.880 ************************************ 00:06:44.880 START TEST accel_fill 00:06:44.880 ************************************ 00:06:44.880 13:17:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.880 13:17:42 -- accel/accel.sh@16 -- # local accel_opc 
00:06:44.880 13:17:42 -- accel/accel.sh@17 -- # local accel_module 00:06:44.880 13:17:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.880 13:17:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.880 13:17:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.880 13:17:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.880 13:17:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.880 13:17:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.880 13:17:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.880 13:17:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.880 13:17:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.880 13:17:42 -- accel/accel.sh@42 -- # jq -r . 00:06:44.880 [2024-07-26 13:17:42.321444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.880 [2024-07-26 13:17:42.321517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761878 ] 00:06:44.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.141 [2024-07-26 13:17:42.381642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.141 [2024-07-26 13:17:42.411126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.083 13:17:43 -- accel/accel.sh@18 -- # out=' 00:06:46.083 SPDK Configuration: 00:06:46.083 Core mask: 0x1 00:06:46.083 00:06:46.083 Accel Perf Configuration: 00:06:46.083 Workload Type: fill 00:06:46.083 Fill pattern: 0x80 00:06:46.083 Transfer size: 4096 bytes 00:06:46.083 Vector count 1 00:06:46.083 Module: software 00:06:46.083 Queue depth: 64 00:06:46.083 Allocate depth: 64 00:06:46.083 # threads/core: 1 00:06:46.083 Run time: 1 seconds 00:06:46.083 Verify: Yes 00:06:46.083 00:06:46.083 Running for 1 seconds... 00:06:46.083 00:06:46.083 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.083 ------------------------------------------------------------------------------------ 00:06:46.083 0,0 471232/s 1840 MiB/s 0 0 00:06:46.083 ==================================================================================== 00:06:46.083 Total 471232/s 1840 MiB/s 0 0' 00:06:46.083 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.083 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.083 13:17:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.083 13:17:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.083 13:17:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.083 13:17:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.083 13:17:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.083 13:17:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.083 13:17:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.083 13:17:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.083 13:17:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.083 13:17:43 -- accel/accel.sh@42 -- # jq -r . 00:06:46.083 [2024-07-26 13:17:43.548517] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:46.083 [2024-07-26 13:17:43.548591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762159 ] 00:06:46.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.345 [2024-07-26 13:17:43.608265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.345 [2024-07-26 13:17:43.637158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.345 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.345 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.345 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.345 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.345 13:17:43 -- accel/accel.sh@21 -- # val=0x1 00:06:46.345 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.345 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.345 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.345 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.345 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.345 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=fill 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=0x80 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=software 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=64 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=64 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- 
accel/accel.sh@21 -- # val=1 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val=Yes 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:46.346 13:17:43 -- accel/accel.sh@21 -- # val= 00:06:46.346 13:17:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # IFS=: 00:06:46.346 13:17:43 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@21 -- # val= 00:06:47.290 13:17:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.290 13:17:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.290 13:17:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.290 13:17:44 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:47.290 13:17:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.290 00:06:47.290 real 0m2.453s 00:06:47.290 user 0m2.261s 00:06:47.290 sys 0m0.188s 00:06:47.290 13:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.290 13:17:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.290 ************************************ 00:06:47.290 END TEST accel_fill 00:06:47.290 ************************************ 00:06:47.551 13:17:44 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:47.551 13:17:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:47.551 13:17:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.551 13:17:44 -- common/autotest_common.sh@10 -- # set +x 00:06:47.551 ************************************ 00:06:47.551 START TEST 
accel_copy_crc32c 00:06:47.551 ************************************ 00:06:47.551 13:17:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:47.551 13:17:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.551 13:17:44 -- accel/accel.sh@17 -- # local accel_module 00:06:47.551 13:17:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:47.551 13:17:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:47.551 13:17:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.551 13:17:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.551 13:17:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.551 13:17:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.551 13:17:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.551 13:17:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.551 13:17:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.551 13:17:44 -- accel/accel.sh@42 -- # jq -r . 00:06:47.551 [2024-07-26 13:17:44.813488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:47.551 [2024-07-26 13:17:44.813575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762511 ] 00:06:47.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.551 [2024-07-26 13:17:44.874420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.551 [2024-07-26 13:17:44.902280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.939 13:17:46 -- accel/accel.sh@18 -- # out=' 00:06:48.939 SPDK Configuration: 00:06:48.939 Core mask: 0x1 00:06:48.939 00:06:48.939 Accel Perf Configuration: 00:06:48.939 Workload Type: copy_crc32c 00:06:48.939 CRC-32C seed: 0 00:06:48.939 Vector size: 4096 bytes 00:06:48.939 Transfer size: 4096 bytes 00:06:48.939 Vector count 1 00:06:48.939 Module: software 00:06:48.939 Queue depth: 32 00:06:48.939 Allocate depth: 32 00:06:48.939 # threads/core: 1 00:06:48.939 Run time: 1 seconds 00:06:48.939 Verify: Yes 00:06:48.939 00:06:48.939 Running for 1 seconds... 00:06:48.939 00:06:48.939 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.939 ------------------------------------------------------------------------------------ 00:06:48.939 0,0 246976/s 964 MiB/s 0 0 00:06:48.939 ==================================================================================== 00:06:48.939 Total 246976/s 964 MiB/s 0 0' 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:48.939 13:17:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:48.939 13:17:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.939 13:17:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.939 13:17:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.939 13:17:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.939 13:17:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.939 13:17:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.939 13:17:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.939 13:17:46 -- accel/accel.sh@42 -- # jq -r . 
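
A quick cross-check of the two software-module results reported so far (fill and copy_crc32c), separate from the harness output: the MiB/s column is simply transfers/s multiplied by the 4096-byte transfer size, with the fractional MiB apparently truncated rather than rounded. A one-line awk sketch reproduces both figures:

  # editorial cross-check of the reported bandwidth (values copied from the tables above)
  awk 'BEGIN { printf "fill:        %d MiB/s\n", int(471232 * 4096 / 1048576) }'   # table shows 1840 MiB/s
  awk 'BEGIN { printf "copy_crc32c: %d MiB/s\n", int(246976 * 4096 / 1048576) }'   # table shows 964 MiB/s
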
00:06:48.939 [2024-07-26 13:17:46.040399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:48.939 [2024-07-26 13:17:46.040475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762781 ] 00:06:48.939 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.939 [2024-07-26 13:17:46.099956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.939 [2024-07-26 13:17:46.127902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val=0x1 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val=0 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 13:17:46 -- accel/accel.sh@21 -- # val=software 00:06:48.939 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val=32 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 
00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val=32 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val=1 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val=Yes 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:48.940 13:17:46 -- accel/accel.sh@21 -- # val= 00:06:48.940 13:17:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # IFS=: 00:06:48.940 13:17:46 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@21 -- # val= 00:06:49.884 13:17:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # IFS=: 00:06:49.884 13:17:47 -- accel/accel.sh@20 -- # read -r var val 00:06:49.884 13:17:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.884 13:17:47 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:49.884 13:17:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.884 00:06:49.884 real 0m2.454s 00:06:49.884 user 0m2.251s 00:06:49.884 sys 0m0.199s 00:06:49.884 13:17:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.884 13:17:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.884 ************************************ 00:06:49.884 END TEST accel_copy_crc32c 00:06:49.884 ************************************ 00:06:49.884 
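
Every case in this block is driven by the same accel_perf binary that appears verbatim in the trace above, with only -w and the workload-specific flags changing. For reference, a by-hand rerun of the case that just finished might look like the sketch below; the workspace path is the one from this log, and dropping -c (the wrapper normally feeds a generated accel JSON config in over /dev/fd/62) in favour of the app's defaults is an assumption, not something the harness does:

  # sketch: rerun the copy_crc32c case outside the autotest wrapper (assumed to work with app defaults)
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y        # queue/allocate depth come out as 32, as in the tables above
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2   # the two-vector variant exercised next
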
13:17:47 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.884 13:17:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:49.884 13:17:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.884 13:17:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.884 ************************************ 00:06:49.884 START TEST accel_copy_crc32c_C2 00:06:49.884 ************************************ 00:06:49.884 13:17:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:49.884 13:17:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.884 13:17:47 -- accel/accel.sh@17 -- # local accel_module 00:06:49.884 13:17:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:49.884 13:17:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:49.884 13:17:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.884 13:17:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.884 13:17:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.884 13:17:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.884 13:17:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.884 13:17:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.884 13:17:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.884 13:17:47 -- accel/accel.sh@42 -- # jq -r . 00:06:49.884 [2024-07-26 13:17:47.309081] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:49.884 [2024-07-26 13:17:47.309151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762928 ] 00:06:49.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.146 [2024-07-26 13:17:47.370512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.146 [2024-07-26 13:17:47.401108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.090 13:17:48 -- accel/accel.sh@18 -- # out=' 00:06:51.090 SPDK Configuration: 00:06:51.090 Core mask: 0x1 00:06:51.090 00:06:51.090 Accel Perf Configuration: 00:06:51.090 Workload Type: copy_crc32c 00:06:51.090 CRC-32C seed: 0 00:06:51.090 Vector size: 4096 bytes 00:06:51.090 Transfer size: 8192 bytes 00:06:51.090 Vector count 2 00:06:51.090 Module: software 00:06:51.090 Queue depth: 32 00:06:51.090 Allocate depth: 32 00:06:51.090 # threads/core: 1 00:06:51.090 Run time: 1 seconds 00:06:51.090 Verify: Yes 00:06:51.090 00:06:51.090 Running for 1 seconds... 
00:06:51.090 00:06:51.090 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.090 ------------------------------------------------------------------------------------ 00:06:51.091 0,0 187520/s 1465 MiB/s 0 0 00:06:51.091 ==================================================================================== 00:06:51.091 Total 187520/s 732 MiB/s 0 0' 00:06:51.091 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.091 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.091 13:17:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:51.091 13:17:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:51.091 13:17:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.091 13:17:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.091 13:17:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.091 13:17:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.091 13:17:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.091 13:17:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.091 13:17:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.091 13:17:48 -- accel/accel.sh@42 -- # jq -r . 00:06:51.091 [2024-07-26 13:17:48.541908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:51.091 [2024-07-26 13:17:48.542003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763216 ] 00:06:51.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.352 [2024-07-26 13:17:48.605555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.353 [2024-07-26 13:17:48.633059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=0x1 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=0 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 
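
One reading note on the -C 2 table above (an observation about how the numbers are reported, not a test failure): with two 4096-byte source vectors per operation, the per-core row rates the full 8192-byte transfer (1465 MiB/s), while the Total row appears to count only the 4096-byte vector size, which is why it shows 732 MiB/s for the same 187520 transfers/s. A hedged cross-check of both figures:

  # both MiB/s figures for the -C 2 run follow from the same 187520 transfers/s
  awk 'BEGIN { printf "per-core: %d MiB/s   total: %d MiB/s\n", int(187520*8192/1048576), int(187520*4096/1048576) }'
  # -> per-core: 1465 MiB/s   total: 732 MiB/s
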
00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=software 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=32 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=32 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=1 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val=Yes 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.353 13:17:48 -- accel/accel.sh@21 -- # val= 00:06:51.353 13:17:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.353 13:17:48 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@21 -- # val= 00:06:52.347 13:17:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # IFS=: 00:06:52.347 13:17:49 -- accel/accel.sh@20 -- # read -r var val 00:06:52.347 13:17:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.347 13:17:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:52.347 13:17:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.347 00:06:52.347 real 0m2.463s 00:06:52.347 user 0m1.128s 00:06:52.347 sys 0m0.104s 00:06:52.347 13:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.347 13:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.347 ************************************ 00:06:52.347 END TEST accel_copy_crc32c_C2 00:06:52.347 ************************************ 00:06:52.347 13:17:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:52.347 13:17:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:52.347 13:17:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.347 13:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:52.347 ************************************ 00:06:52.347 START TEST accel_dualcast 00:06:52.347 ************************************ 00:06:52.347 13:17:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:52.347 13:17:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.347 13:17:49 -- accel/accel.sh@17 -- # local accel_module 00:06:52.347 13:17:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:52.347 13:17:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:52.347 13:17:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.347 13:17:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.347 13:17:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.347 13:17:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.347 13:17:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.347 13:17:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.347 13:17:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.347 13:17:49 -- accel/accel.sh@42 -- # jq -r . 00:06:52.347 [2024-07-26 13:17:49.813033] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:52.347 [2024-07-26 13:17:49.813119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763573 ] 00:06:52.608 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.608 [2024-07-26 13:17:49.874656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.608 [2024-07-26 13:17:49.902679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.550 13:17:51 -- accel/accel.sh@18 -- # out=' 00:06:53.550 SPDK Configuration: 00:06:53.550 Core mask: 0x1 00:06:53.550 00:06:53.550 Accel Perf Configuration: 00:06:53.550 Workload Type: dualcast 00:06:53.550 Transfer size: 4096 bytes 00:06:53.550 Vector count 1 00:06:53.550 Module: software 00:06:53.550 Queue depth: 32 00:06:53.550 Allocate depth: 32 00:06:53.550 # threads/core: 1 00:06:53.550 Run time: 1 seconds 00:06:53.550 Verify: Yes 00:06:53.550 00:06:53.550 Running for 1 seconds... 00:06:53.550 00:06:53.550 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.550 ------------------------------------------------------------------------------------ 00:06:53.550 0,0 365376/s 1427 MiB/s 0 0 00:06:53.550 ==================================================================================== 00:06:53.550 Total 365376/s 1427 MiB/s 0 0' 00:06:53.550 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.550 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.550 13:17:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:53.550 13:17:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:53.550 13:17:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.550 13:17:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.550 13:17:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.550 13:17:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.550 13:17:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.550 13:17:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.550 13:17:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.550 13:17:51 -- accel/accel.sh@42 -- # jq -r . 00:06:53.812 [2024-07-26 13:17:51.041685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:53.812 [2024-07-26 13:17:51.041760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763838 ] 00:06:53.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.812 [2024-07-26 13:17:51.101378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.812 [2024-07-26 13:17:51.129700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=0x1 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=dualcast 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=software 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=32 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=32 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=1 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val=Yes 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:53.812 13:17:51 -- accel/accel.sh@21 -- # val= 00:06:53.812 13:17:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # IFS=: 00:06:53.812 13:17:51 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@21 -- # val= 00:06:55.199 13:17:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.199 13:17:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.199 13:17:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.199 13:17:52 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:55.199 13:17:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.199 00:06:55.199 real 0m2.456s 00:06:55.199 user 0m2.254s 00:06:55.199 sys 0m0.197s 00:06:55.199 13:17:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.199 13:17:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.199 ************************************ 00:06:55.199 END TEST accel_dualcast 00:06:55.199 ************************************ 00:06:55.199 13:17:52 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:55.199 13:17:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.199 13:17:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.199 13:17:52 -- common/autotest_common.sh@10 -- # set +x 00:06:55.199 ************************************ 00:06:55.199 START TEST accel_compare 00:06:55.199 ************************************ 00:06:55.199 13:17:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:55.199 13:17:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.199 13:17:52 -- 
accel/accel.sh@17 -- # local accel_module 00:06:55.199 13:17:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:55.199 13:17:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:55.199 13:17:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.199 13:17:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.199 13:17:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.199 13:17:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.199 13:17:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.199 13:17:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.199 13:17:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.199 13:17:52 -- accel/accel.sh@42 -- # jq -r . 00:06:55.199 [2024-07-26 13:17:52.305343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:55.199 [2024-07-26 13:17:52.305427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763980 ] 00:06:55.199 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.199 [2024-07-26 13:17:52.364900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.199 [2024-07-26 13:17:52.394257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.141 13:17:53 -- accel/accel.sh@18 -- # out=' 00:06:56.141 SPDK Configuration: 00:06:56.141 Core mask: 0x1 00:06:56.141 00:06:56.141 Accel Perf Configuration: 00:06:56.141 Workload Type: compare 00:06:56.141 Transfer size: 4096 bytes 00:06:56.141 Vector count 1 00:06:56.141 Module: software 00:06:56.141 Queue depth: 32 00:06:56.141 Allocate depth: 32 00:06:56.141 # threads/core: 1 00:06:56.141 Run time: 1 seconds 00:06:56.141 Verify: Yes 00:06:56.141 00:06:56.141 Running for 1 seconds... 00:06:56.141 00:06:56.141 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.141 ------------------------------------------------------------------------------------ 00:06:56.141 0,0 436608/s 1705 MiB/s 0 0 00:06:56.141 ==================================================================================== 00:06:56.141 Total 436608/s 1705 MiB/s 0 0' 00:06:56.141 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.141 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.141 13:17:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:56.141 13:17:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:56.141 13:17:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.141 13:17:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.141 13:17:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.141 13:17:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.141 13:17:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.141 13:17:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.141 13:17:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.141 13:17:53 -- accel/accel.sh@42 -- # jq -r . 00:06:56.141 [2024-07-26 13:17:53.530743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:56.141 [2024-07-26 13:17:53.530818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764276 ] 00:06:56.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.141 [2024-07-26 13:17:53.590204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.402 [2024-07-26 13:17:53.618018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val=0x1 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val=compare 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.402 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.402 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.402 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val=software 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val=32 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val=32 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val=1 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val=Yes 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.403 13:17:53 -- accel/accel.sh@21 -- # val= 00:06:56.403 13:17:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # IFS=: 00:06:56.403 13:17:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@21 -- # val= 00:06:57.345 13:17:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.345 13:17:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.345 13:17:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.345 13:17:54 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:57.345 13:17:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.345 00:06:57.345 real 0m2.450s 00:06:57.345 user 0m1.129s 00:06:57.345 sys 0m0.098s 00:06:57.345 13:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.345 13:17:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.345 ************************************ 00:06:57.346 END TEST accel_compare 00:06:57.346 ************************************ 00:06:57.346 13:17:54 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:57.346 13:17:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.346 13:17:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.346 13:17:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 ************************************ 00:06:57.346 START TEST accel_xor 00:06:57.346 ************************************ 00:06:57.346 13:17:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:57.346 13:17:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.346 13:17:54 -- accel/accel.sh@17 
-- # local accel_module 00:06:57.346 13:17:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:57.346 13:17:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:57.346 13:17:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.346 13:17:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.346 13:17:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.346 13:17:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.346 13:17:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.346 13:17:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.346 13:17:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.346 13:17:54 -- accel/accel.sh@42 -- # jq -r . 00:06:57.346 [2024-07-26 13:17:54.795523] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:57.346 [2024-07-26 13:17:54.795615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764627 ] 00:06:57.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.606 [2024-07-26 13:17:54.857526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.606 [2024-07-26 13:17:54.887395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.549 13:17:55 -- accel/accel.sh@18 -- # out=' 00:06:58.549 SPDK Configuration: 00:06:58.549 Core mask: 0x1 00:06:58.549 00:06:58.549 Accel Perf Configuration: 00:06:58.549 Workload Type: xor 00:06:58.549 Source buffers: 2 00:06:58.549 Transfer size: 4096 bytes 00:06:58.549 Vector count 1 00:06:58.549 Module: software 00:06:58.549 Queue depth: 32 00:06:58.549 Allocate depth: 32 00:06:58.549 # threads/core: 1 00:06:58.549 Run time: 1 seconds 00:06:58.549 Verify: Yes 00:06:58.549 00:06:58.549 Running for 1 seconds... 00:06:58.549 00:06:58.549 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.549 ------------------------------------------------------------------------------------ 00:06:58.549 0,0 361312/s 1411 MiB/s 0 0 00:06:58.549 ==================================================================================== 00:06:58.549 Total 361312/s 1411 MiB/s 0 0' 00:06:58.549 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.549 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.549 13:17:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:58.549 13:17:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:58.549 13:17:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.549 13:17:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.549 13:17:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.549 13:17:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.549 13:17:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.549 13:17:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.549 13:17:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.549 13:17:56 -- accel/accel.sh@42 -- # jq -r . 00:06:58.810 [2024-07-26 13:17:56.025487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:58.810 [2024-07-26 13:17:56.025565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764875 ] 00:06:58.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.810 [2024-07-26 13:17:56.085029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.810 [2024-07-26 13:17:56.113159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val=0x1 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val=xor 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val=2 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.810 13:17:56 -- accel/accel.sh@21 -- # val=software 00:06:58.810 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.810 13:17:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.810 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val=32 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val=32 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- 
accel/accel.sh@21 -- # val=1 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val=Yes 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.811 13:17:56 -- accel/accel.sh@21 -- # val= 00:06:58.811 13:17:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.811 13:17:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@21 -- # val= 00:06:59.754 13:17:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.754 13:17:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.754 13:17:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.754 13:17:57 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:59.754 13:17:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.754 00:06:59.754 real 0m2.457s 00:06:59.754 user 0m2.269s 00:06:59.754 sys 0m0.184s 00:06:59.754 13:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.754 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.754 ************************************ 00:06:59.754 END TEST accel_xor 00:06:59.754 ************************************ 00:07:00.016 13:17:57 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:00.016 13:17:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.016 13:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.016 13:17:57 -- common/autotest_common.sh@10 -- # set +x 00:07:00.016 ************************************ 00:07:00.016 START TEST accel_xor 
00:07:00.016 ************************************ 00:07:00.016 13:17:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:00.016 13:17:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.016 13:17:57 -- accel/accel.sh@17 -- # local accel_module 00:07:00.016 13:17:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:00.016 13:17:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:00.016 13:17:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.016 13:17:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.016 13:17:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.016 13:17:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.016 13:17:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.016 13:17:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.016 13:17:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.016 13:17:57 -- accel/accel.sh@42 -- # jq -r . 00:07:00.016 [2024-07-26 13:17:57.289937] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:00.016 [2024-07-26 13:17:57.290013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765020 ] 00:07:00.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.016 [2024-07-26 13:17:57.350075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.016 [2024-07-26 13:17:57.379097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.404 13:17:58 -- accel/accel.sh@18 -- # out=' 00:07:01.404 SPDK Configuration: 00:07:01.404 Core mask: 0x1 00:07:01.404 00:07:01.404 Accel Perf Configuration: 00:07:01.404 Workload Type: xor 00:07:01.404 Source buffers: 3 00:07:01.404 Transfer size: 4096 bytes 00:07:01.404 Vector count 1 00:07:01.404 Module: software 00:07:01.404 Queue depth: 32 00:07:01.404 Allocate depth: 32 00:07:01.404 # threads/core: 1 00:07:01.404 Run time: 1 seconds 00:07:01.404 Verify: Yes 00:07:01.404 00:07:01.404 Running for 1 seconds... 00:07:01.404 00:07:01.404 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.404 ------------------------------------------------------------------------------------ 00:07:01.404 0,0 343008/s 1339 MiB/s 0 0 00:07:01.404 ==================================================================================== 00:07:01.404 Total 343008/s 1339 MiB/s 0 0' 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:01.404 13:17:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:01.404 13:17:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.404 13:17:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.404 13:17:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.404 13:17:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.404 13:17:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.404 13:17:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.404 13:17:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.404 13:17:58 -- accel/accel.sh@42 -- # jq -r . 00:07:01.404 [2024-07-26 13:17:58.516533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
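A minimal sketch of how the xor pass above could be reproduced by hand, assuming a built SPDK tree; the harness passes its generated JSON accel config to accel_perf as -c /dev/fd/62, and a regular file path (or omitting -c for the default software module) is assumed to behave the same:

    # hypothetical standalone run; flags taken from the invocation logged above (accel.json is a placeholder)
    # -t 1, -w xor and -x 3 line up with the "Run time", "Workload Type" and "Source buffers" fields in the summary
    ./build/examples/accel_perf -c accel.json -t 1 -w xor -y -x 3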
00:07:01.404 [2024-07-26 13:17:58.516608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765332 ] 00:07:01.404 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.404 [2024-07-26 13:17:58.576018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.404 [2024-07-26 13:17:58.603505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=0x1 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=xor 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=3 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=software 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=32 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=32 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- 
accel/accel.sh@21 -- # val=1 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val=Yes 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.404 13:17:58 -- accel/accel.sh@21 -- # val= 00:07:01.404 13:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # IFS=: 00:07:01.404 13:17:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@21 -- # val= 00:07:02.349 13:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # IFS=: 00:07:02.349 13:17:59 -- accel/accel.sh@20 -- # read -r var val 00:07:02.349 13:17:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.349 13:17:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:02.349 13:17:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.349 00:07:02.349 real 0m2.452s 00:07:02.349 user 0m2.255s 00:07:02.349 sys 0m0.192s 00:07:02.349 13:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.349 13:17:59 -- common/autotest_common.sh@10 -- # set +x 00:07:02.349 ************************************ 00:07:02.349 END TEST accel_xor 00:07:02.349 ************************************ 00:07:02.349 13:17:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:02.349 13:17:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:02.349 13:17:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.349 13:17:59 -- common/autotest_common.sh@10 -- # set +x 00:07:02.349 ************************************ 00:07:02.349 START TEST 
accel_dif_verify 00:07:02.349 ************************************ 00:07:02.349 13:17:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:02.349 13:17:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.349 13:17:59 -- accel/accel.sh@17 -- # local accel_module 00:07:02.349 13:17:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:02.349 13:17:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:02.349 13:17:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.349 13:17:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.349 13:17:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.349 13:17:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.349 13:17:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.349 13:17:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.349 13:17:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.349 13:17:59 -- accel/accel.sh@42 -- # jq -r . 00:07:02.349 [2024-07-26 13:17:59.779349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:02.349 [2024-07-26 13:17:59.779439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765683 ] 00:07:02.349 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.610 [2024-07-26 13:17:59.839593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.610 [2024-07-26 13:17:59.867268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.554 13:18:00 -- accel/accel.sh@18 -- # out=' 00:07:03.554 SPDK Configuration: 00:07:03.554 Core mask: 0x1 00:07:03.554 00:07:03.554 Accel Perf Configuration: 00:07:03.554 Workload Type: dif_verify 00:07:03.554 Vector size: 4096 bytes 00:07:03.554 Transfer size: 4096 bytes 00:07:03.554 Block size: 512 bytes 00:07:03.554 Metadata size: 8 bytes 00:07:03.554 Vector count 1 00:07:03.554 Module: software 00:07:03.554 Queue depth: 32 00:07:03.554 Allocate depth: 32 00:07:03.554 # threads/core: 1 00:07:03.554 Run time: 1 seconds 00:07:03.554 Verify: No 00:07:03.554 00:07:03.554 Running for 1 seconds... 00:07:03.554 00:07:03.554 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.554 ------------------------------------------------------------------------------------ 00:07:03.554 0,0 93888/s 372 MiB/s 0 0 00:07:03.554 ==================================================================================== 00:07:03.554 Total 93888/s 366 MiB/s 0 0' 00:07:03.554 13:18:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.554 13:18:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.554 13:18:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:03.554 13:18:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:03.554 13:18:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.554 13:18:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.554 13:18:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.554 13:18:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.554 13:18:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.554 13:18:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.554 13:18:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.554 13:18:00 -- accel/accel.sh@42 -- # jq -r . 
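The Bandwidth column in these summaries appears to be just the transfer rate multiplied by the 4096-byte transfer size; a quick illustrative check against the dif_verify Total row above (not part of the harness output):

    # 93888 transfers/s * 4096 bytes per transfer, converted to MiB/s with integer division
    echo $(( 93888 * 4096 / 1024 / 1024 ))   # prints 366, matching "Total 93888/s 366 MiB/s"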
00:07:03.554 [2024-07-26 13:18:01.006623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:03.554 [2024-07-26 13:18:01.006716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765945 ] 00:07:03.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.815 [2024-07-26 13:18:01.067172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.815 [2024-07-26 13:18:01.095240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.815 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.815 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.815 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.815 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.815 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.815 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.815 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=0x1 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=dif_verify 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=software 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=32 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=32 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=1 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val=No 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.816 13:18:01 -- accel/accel.sh@21 -- # val= 00:07:03.816 13:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.816 13:18:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@21 -- # val= 00:07:04.757 13:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.757 13:18:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.757 13:18:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.757 13:18:02 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:04.757 13:18:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.757 00:07:04.757 real 0m2.456s 00:07:04.757 user 0m2.261s 00:07:04.757 sys 0m0.189s 00:07:04.757 13:18:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.757 13:18:02 -- common/autotest_common.sh@10 -- # set +x 00:07:04.757 ************************************ 00:07:04.757 END TEST accel_dif_verify 00:07:04.757 ************************************ 00:07:05.018 13:18:02 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:05.018 13:18:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:05.018 13:18:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.018 13:18:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.018 ************************************ 00:07:05.018 START TEST accel_dif_generate 00:07:05.018 ************************************ 00:07:05.018 13:18:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:05.018 13:18:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.018 13:18:02 -- accel/accel.sh@17 -- # local accel_module 00:07:05.018 13:18:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:05.018 13:18:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:05.018 13:18:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.018 13:18:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.018 13:18:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.018 13:18:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.018 13:18:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.018 13:18:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.018 13:18:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.018 13:18:02 -- accel/accel.sh@42 -- # jq -r . 00:07:05.018 [2024-07-26 13:18:02.279090] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:05.018 [2024-07-26 13:18:02.279180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766175 ] 00:07:05.018 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.018 [2024-07-26 13:18:02.341457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.018 [2024-07-26 13:18:02.372234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.417 13:18:03 -- accel/accel.sh@18 -- # out=' 00:07:06.417 SPDK Configuration: 00:07:06.417 Core mask: 0x1 00:07:06.417 00:07:06.417 Accel Perf Configuration: 00:07:06.417 Workload Type: dif_generate 00:07:06.417 Vector size: 4096 bytes 00:07:06.417 Transfer size: 4096 bytes 00:07:06.417 Block size: 512 bytes 00:07:06.417 Metadata size: 8 bytes 00:07:06.417 Vector count 1 00:07:06.417 Module: software 00:07:06.417 Queue depth: 32 00:07:06.417 Allocate depth: 32 00:07:06.417 # threads/core: 1 00:07:06.417 Run time: 1 seconds 00:07:06.417 Verify: No 00:07:06.417 00:07:06.417 Running for 1 seconds... 
00:07:06.417 00:07:06.417 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.417 ------------------------------------------------------------------------------------ 00:07:06.417 0,0 114464/s 454 MiB/s 0 0 00:07:06.417 ==================================================================================== 00:07:06.417 Total 114464/s 447 MiB/s 0 0' 00:07:06.417 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.417 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.417 13:18:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:06.417 13:18:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:06.417 13:18:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.417 13:18:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.417 13:18:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.417 13:18:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.417 13:18:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.417 13:18:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.417 13:18:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.417 13:18:03 -- accel/accel.sh@42 -- # jq -r . 00:07:06.417 [2024-07-26 13:18:03.513573] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:06.417 [2024-07-26 13:18:03.513701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766498 ] 00:07:06.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.417 [2024-07-26 13:18:03.580655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.418 [2024-07-26 13:18:03.609304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=0x1 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=dif_generate 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 
00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=software 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=32 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=32 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=1 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val=No 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.418 13:18:03 -- accel/accel.sh@21 -- # val= 00:07:06.418 13:18:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.418 13:18:03 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@21 -- # val= 00:07:07.360 13:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # IFS=: 00:07:07.360 13:18:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.360 13:18:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.360 13:18:04 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:07.360 13:18:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.360 00:07:07.360 real 0m2.474s 00:07:07.360 user 0m2.275s 00:07:07.360 sys 0m0.206s 00:07:07.360 13:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.360 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.360 ************************************ 00:07:07.360 END TEST accel_dif_generate 00:07:07.360 ************************************ 00:07:07.360 13:18:04 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:07.360 13:18:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:07.360 13:18:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.360 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.360 ************************************ 00:07:07.360 START TEST accel_dif_generate_copy 00:07:07.360 ************************************ 00:07:07.361 13:18:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:07.361 13:18:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.361 13:18:04 -- accel/accel.sh@17 -- # local accel_module 00:07:07.361 13:18:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:07.361 13:18:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:07.361 13:18:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.361 13:18:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.361 13:18:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.361 13:18:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.361 13:18:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.361 13:18:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.361 13:18:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.361 13:18:04 -- accel/accel.sh@42 -- # jq -r . 00:07:07.361 [2024-07-26 13:18:04.795394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:07.361 [2024-07-26 13:18:04.795467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766848 ] 00:07:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.621 [2024-07-26 13:18:04.855216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.621 [2024-07-26 13:18:04.882893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.597 13:18:05 -- accel/accel.sh@18 -- # out=' 00:07:08.597 SPDK Configuration: 00:07:08.597 Core mask: 0x1 00:07:08.597 00:07:08.597 Accel Perf Configuration: 00:07:08.597 Workload Type: dif_generate_copy 00:07:08.597 Vector size: 4096 bytes 00:07:08.597 Transfer size: 4096 bytes 00:07:08.597 Vector count 1 00:07:08.597 Module: software 00:07:08.597 Queue depth: 32 00:07:08.597 Allocate depth: 32 00:07:08.597 # threads/core: 1 00:07:08.597 Run time: 1 seconds 00:07:08.597 Verify: No 00:07:08.597 00:07:08.597 Running for 1 seconds... 00:07:08.597 00:07:08.597 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.597 ------------------------------------------------------------------------------------ 00:07:08.597 0,0 87712/s 347 MiB/s 0 0 00:07:08.597 ==================================================================================== 00:07:08.597 Total 87712/s 342 MiB/s 0 0' 00:07:08.597 13:18:05 -- accel/accel.sh@20 -- # IFS=: 00:07:08.597 13:18:05 -- accel/accel.sh@20 -- # read -r var val 00:07:08.597 13:18:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:08.597 13:18:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:08.597 13:18:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.597 13:18:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.597 13:18:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.597 13:18:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.597 13:18:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.597 13:18:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.597 13:18:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.597 13:18:06 -- accel/accel.sh@42 -- # jq -r . 00:07:08.597 [2024-07-26 13:18:06.021264] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:08.597 [2024-07-26 13:18:06.021342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767047 ] 00:07:08.597 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.857 [2024-07-26 13:18:06.081331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.857 [2024-07-26 13:18:06.109542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=0x1 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=software 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=32 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=32 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var 
val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=1 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val=No 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.857 13:18:06 -- accel/accel.sh@21 -- # val= 00:07:08.857 13:18:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.857 13:18:06 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@21 -- # val= 00:07:09.800 13:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.800 13:18:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.800 13:18:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.800 13:18:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:09.800 13:18:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.800 00:07:09.800 real 0m2.457s 00:07:09.800 user 0m2.267s 00:07:09.800 sys 0m0.197s 00:07:09.800 13:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.800 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.800 ************************************ 00:07:09.800 END TEST accel_dif_generate_copy 00:07:09.800 ************************************ 00:07:09.800 13:18:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:09.800 13:18:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.800 13:18:07 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:09.800 13:18:07 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.800 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.800 ************************************ 00:07:09.800 START TEST accel_comp 00:07:09.800 ************************************ 00:07:09.800 13:18:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.800 13:18:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.800 13:18:07 -- accel/accel.sh@17 -- # local accel_module 00:07:09.800 13:18:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.061 13:18:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.061 13:18:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.061 13:18:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.061 13:18:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.061 13:18:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.061 13:18:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.061 13:18:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.061 13:18:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.061 13:18:07 -- accel/accel.sh@42 -- # jq -r . 00:07:10.061 [2024-07-26 13:18:07.297024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:10.061 [2024-07-26 13:18:07.297101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767229 ] 00:07:10.061 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.061 [2024-07-26 13:18:07.369634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.061 [2024-07-26 13:18:07.402123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.448 13:18:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:11.448 00:07:11.448 SPDK Configuration: 00:07:11.448 Core mask: 0x1 00:07:11.448 00:07:11.448 Accel Perf Configuration: 00:07:11.448 Workload Type: compress 00:07:11.448 Transfer size: 4096 bytes 00:07:11.448 Vector count 1 00:07:11.448 Module: software 00:07:11.448 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.448 Queue depth: 32 00:07:11.448 Allocate depth: 32 00:07:11.448 # threads/core: 1 00:07:11.448 Run time: 1 seconds 00:07:11.448 Verify: No 00:07:11.448 00:07:11.448 Running for 1 seconds... 
00:07:11.448 00:07:11.448 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.448 ------------------------------------------------------------------------------------ 00:07:11.448 0,0 47616/s 198 MiB/s 0 0 00:07:11.448 ==================================================================================== 00:07:11.448 Total 47616/s 186 MiB/s 0 0' 00:07:11.448 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.448 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.448 13:18:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.449 13:18:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.449 13:18:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.449 13:18:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.449 13:18:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.449 13:18:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.449 13:18:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.449 13:18:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.449 13:18:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.449 13:18:08 -- accel/accel.sh@42 -- # jq -r . 00:07:11.449 [2024-07-26 13:18:08.543397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:11.449 [2024-07-26 13:18:08.543474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767564 ] 00:07:11.449 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.449 [2024-07-26 13:18:08.603636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.449 [2024-07-26 13:18:08.631755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=0x1 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=compress 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 
13:18:08 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=software 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=32 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=32 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=1 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val=No 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:11.449 13:18:08 -- accel/accel.sh@21 -- # val= 00:07:11.449 13:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # IFS=: 00:07:11.449 13:18:08 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # 
IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@21 -- # val= 00:07:12.393 13:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.393 13:18:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.393 13:18:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.393 13:18:09 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:12.393 13:18:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.393 00:07:12.393 real 0m2.482s 00:07:12.393 user 0m2.280s 00:07:12.393 sys 0m0.209s 00:07:12.393 13:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.393 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.393 ************************************ 00:07:12.393 END TEST accel_comp 00:07:12.393 ************************************ 00:07:12.393 13:18:09 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:12.393 13:18:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:12.393 13:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.393 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.393 ************************************ 00:07:12.393 START TEST accel_decomp 00:07:12.393 ************************************ 00:07:12.393 13:18:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:12.393 13:18:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.393 13:18:09 -- accel/accel.sh@17 -- # local accel_module 00:07:12.393 13:18:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:12.393 13:18:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:12.393 13:18:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.393 13:18:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.393 13:18:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.393 13:18:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.393 13:18:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.393 13:18:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.393 13:18:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.393 13:18:09 -- accel/accel.sh@42 -- # jq -r . 00:07:12.393 [2024-07-26 13:18:09.823120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:12.393 [2024-07-26 13:18:09.823208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768223 ] 00:07:12.393 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.654 [2024-07-26 13:18:09.884057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.654 [2024-07-26 13:18:09.914307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.598 13:18:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:13.598 00:07:13.598 SPDK Configuration: 00:07:13.599 Core mask: 0x1 00:07:13.599 00:07:13.599 Accel Perf Configuration: 00:07:13.599 Workload Type: decompress 00:07:13.599 Transfer size: 4096 bytes 00:07:13.599 Vector count 1 00:07:13.599 Module: software 00:07:13.599 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.599 Queue depth: 32 00:07:13.599 Allocate depth: 32 00:07:13.599 # threads/core: 1 00:07:13.599 Run time: 1 seconds 00:07:13.599 Verify: Yes 00:07:13.599 00:07:13.599 Running for 1 seconds... 00:07:13.599 00:07:13.599 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.599 ------------------------------------------------------------------------------------ 00:07:13.599 0,0 62912/s 115 MiB/s 0 0 00:07:13.599 ==================================================================================== 00:07:13.599 Total 62912/s 245 MiB/s 0 0' 00:07:13.599 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.599 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.599 13:18:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.599 13:18:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.599 13:18:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.599 13:18:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.599 13:18:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.599 13:18:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.599 13:18:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.599 13:18:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.599 13:18:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.599 13:18:11 -- accel/accel.sh@42 -- # jq -r . 00:07:13.599 [2024-07-26 13:18:11.057399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
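For the compress and decompress passes the script points accel_perf at the bundled test/accel/bib input via -l, and only the decompress pass adds -y, which matches the "Verify: Yes" line in its configuration summary. A hedged sketch of the two invocations, shortened to a relative path and dropping the generated -c config on the assumption that the default software module needs none:

    # assumed to be run from the top of the spdk checkout; workload flags taken from the log above
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y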
00:07:13.599 [2024-07-26 13:18:11.057491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768566 ] 00:07:13.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.861 [2024-07-26 13:18:11.118315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.861 [2024-07-26 13:18:11.146757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=0x1 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=decompress 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=software 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=32 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 
-- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=32 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=1 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val=Yes 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.861 13:18:11 -- accel/accel.sh@21 -- # val= 00:07:13.861 13:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.861 13:18:11 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@21 -- # val= 00:07:14.805 13:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # IFS=: 00:07:14.805 13:18:12 -- accel/accel.sh@20 -- # read -r var val 00:07:14.805 13:18:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.805 13:18:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.805 13:18:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.805 00:07:14.805 real 0m2.470s 00:07:14.805 user 0m2.274s 00:07:14.805 sys 0m0.204s 00:07:14.805 13:18:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.805 13:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:14.805 ************************************ 00:07:14.805 END TEST accel_decomp 00:07:14.805 ************************************ 00:07:15.066 13:18:12 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.066 13:18:12 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:15.066 13:18:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.066 13:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.066 ************************************ 00:07:15.066 START TEST accel_decmop_full 00:07:15.066 ************************************ 00:07:15.066 13:18:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.066 13:18:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.066 13:18:12 -- accel/accel.sh@17 -- # local accel_module 00:07:15.066 13:18:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.066 13:18:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.066 13:18:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.066 13:18:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.066 13:18:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.066 13:18:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.066 13:18:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.067 13:18:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.067 13:18:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.067 13:18:12 -- accel/accel.sh@42 -- # jq -r . 00:07:15.067 [2024-07-26 13:18:12.337898] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:15.067 [2024-07-26 13:18:12.337972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768754 ] 00:07:15.067 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.067 [2024-07-26 13:18:12.401493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.067 [2024-07-26 13:18:12.432321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.454 13:18:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.454 00:07:16.454 SPDK Configuration: 00:07:16.454 Core mask: 0x1 00:07:16.454 00:07:16.454 Accel Perf Configuration: 00:07:16.454 Workload Type: decompress 00:07:16.454 Transfer size: 111250 bytes 00:07:16.454 Vector count 1 00:07:16.454 Module: software 00:07:16.454 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.454 Queue depth: 32 00:07:16.454 Allocate depth: 32 00:07:16.454 # threads/core: 1 00:07:16.454 Run time: 1 seconds 00:07:16.454 Verify: Yes 00:07:16.454 00:07:16.454 Running for 1 seconds... 
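Relative to accel_decomp, the accel_decmop_full test above adds -o 0, and the Accel Perf Configuration it prints reports a transfer size of 111250 bytes instead of the earlier 4096, so the flag appears to let the transfer size follow the input's chunking rather than the 4 KiB default. Sketch of the flag difference, reusing the hypothetical SPDK_ROOT/BIB variables from the earlier sketch:

    # Same decompress workload with -o 0 (transfer size reported above: 111250 bytes)
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0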
00:07:16.454 00:07:16.454 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.454 ------------------------------------------------------------------------------------ 00:07:16.454 0,0 4096/s 169 MiB/s 0 0 00:07:16.454 ==================================================================================== 00:07:16.454 Total 4096/s 434 MiB/s 0 0' 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.454 13:18:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.454 13:18:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.454 13:18:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.454 13:18:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.454 13:18:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.454 13:18:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.454 13:18:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.454 13:18:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.454 13:18:13 -- accel/accel.sh@42 -- # jq -r . 00:07:16.454 [2024-07-26 13:18:13.586057] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:16.454 [2024-07-26 13:18:13.586144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769077 ] 00:07:16.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.454 [2024-07-26 13:18:13.646524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.454 [2024-07-26 13:18:13.674908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=0x1 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=decompress 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 
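The Total row of the table above is consistent with transfers-per-second multiplied by the 111250-byte transfer size. A quick check with the numbers copied from this run (awk's %d truncates, matching the reported figure):

    # 4096 transfers/s * 111250 bytes per transfer, expressed in MiB/s
    awk 'BEGIN { printf "%d MiB/s\n", 4096 * 111250 / (1024 * 1024) }'   # prints: 434 MiB/s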
00:07:16.454 13:18:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=software 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=32 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=32 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=1 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val=Yes 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.454 13:18:13 -- accel/accel.sh@21 -- # val= 00:07:16.454 13:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # IFS=: 00:07:16.454 13:18:13 -- accel/accel.sh@20 -- # read -r var val 00:07:17.398 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.398 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.398 13:18:14 -- accel/accel.sh@20 -- # IFS=: 00:07:17.398 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.399 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # IFS=: 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.399 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.399 13:18:14 -- 
accel/accel.sh@20 -- # IFS=: 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.399 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # IFS=: 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.399 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # IFS=: 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@21 -- # val= 00:07:17.399 13:18:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # IFS=: 00:07:17.399 13:18:14 -- accel/accel.sh@20 -- # read -r var val 00:07:17.399 13:18:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.399 13:18:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.399 13:18:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.399 00:07:17.399 real 0m2.497s 00:07:17.399 user 0m2.306s 00:07:17.399 sys 0m0.197s 00:07:17.399 13:18:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.399 13:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 ************************************ 00:07:17.399 END TEST accel_decmop_full 00:07:17.399 ************************************ 00:07:17.399 13:18:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:17.399 13:18:14 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:17.399 13:18:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.399 13:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 ************************************ 00:07:17.399 START TEST accel_decomp_mcore 00:07:17.399 ************************************ 00:07:17.399 13:18:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:17.399 13:18:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.399 13:18:14 -- accel/accel.sh@17 -- # local accel_module 00:07:17.399 13:18:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:17.399 13:18:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:17.399 13:18:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.399 13:18:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.399 13:18:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.399 13:18:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.399 13:18:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.399 13:18:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.399 13:18:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.399 13:18:14 -- accel/accel.sh@42 -- # jq -r . 00:07:17.660 [2024-07-26 13:18:14.875249] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
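accel_decomp_mcore repeats the decompress workload with -m 0xf, a core mask selecting four cores (0xf = binary 1111, cores 0 through 3); accordingly the run below reports four available cores, starts a reactor on each, and passes -c 0xf in the EAL parameters instead of the single-core -c 0x1 used so far. Sketch with the same hypothetical variables:

    # -m 0xf: run the workload across reactors on cores 0-3
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -m 0xf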
00:07:17.661 [2024-07-26 13:18:14.875339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769431 ] 00:07:17.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.661 [2024-07-26 13:18:14.936565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.661 [2024-07-26 13:18:14.968700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.661 [2024-07-26 13:18:14.968824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.661 [2024-07-26 13:18:14.968985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.661 [2024-07-26 13:18:14.968986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.161 13:18:16 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.161 00:07:19.161 SPDK Configuration: 00:07:19.161 Core mask: 0xf 00:07:19.161 00:07:19.161 Accel Perf Configuration: 00:07:19.161 Workload Type: decompress 00:07:19.161 Transfer size: 4096 bytes 00:07:19.161 Vector count 1 00:07:19.161 Module: software 00:07:19.161 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.161 Queue depth: 32 00:07:19.161 Allocate depth: 32 00:07:19.161 # threads/core: 1 00:07:19.161 Run time: 1 seconds 00:07:19.161 Verify: Yes 00:07:19.161 00:07:19.161 Running for 1 seconds... 00:07:19.161 00:07:19.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.161 ------------------------------------------------------------------------------------ 00:07:19.161 0,0 58624/s 108 MiB/s 0 0 00:07:19.161 3,0 58688/s 108 MiB/s 0 0 00:07:19.161 2,0 86368/s 159 MiB/s 0 0 00:07:19.161 1,0 58560/s 107 MiB/s 0 0 00:07:19.161 ==================================================================================== 00:07:19.161 Total 262240/s 1024 MiB/s 0 0' 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.161 13:18:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.161 13:18:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.161 13:18:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.161 13:18:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.161 13:18:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.161 13:18:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.161 13:18:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.161 13:18:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.161 13:18:16 -- accel/accel.sh@42 -- # jq -r . 00:07:19.161 [2024-07-26 13:18:16.116457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
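In the four-core table above, the Total transfer rate is simply the sum of the per-core rows: 58624 + 58688 + 86368 + 58560 = 262240 transfers/s. One way to recompute that from a saved copy of the table (accel_mcore.log is a hypothetical file holding just the result rows):

    # Sum the per-core transfer columns ("<rate>/s") of the result table
    grep -E '^[0-9]+,[0-9]+ ' accel_mcore.log | awk '{ gsub("/s", "", $2); sum += $2 } END { print sum " transfers/s" }'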
00:07:19.161 [2024-07-26 13:18:16.116527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769663 ] 00:07:19.161 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.161 [2024-07-26 13:18:16.176865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.161 [2024-07-26 13:18:16.207011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.161 [2024-07-26 13:18:16.207128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.161 [2024-07-26 13:18:16.207286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.161 [2024-07-26 13:18:16.207286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=0xf 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=decompress 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=software 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=32 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=32 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=1 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val=Yes 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.161 13:18:16 -- accel/accel.sh@21 -- # val= 00:07:19.161 13:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # IFS=: 00:07:19.161 13:18:16 -- accel/accel.sh@20 -- # read -r var val 00:07:20.106 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.106 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.106 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.106 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.106 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.106 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.106 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.106 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 
13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@21 -- # val= 00:07:20.107 13:18:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # IFS=: 00:07:20.107 13:18:17 -- accel/accel.sh@20 -- # read -r var val 00:07:20.107 13:18:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.107 13:18:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.107 13:18:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.107 00:07:20.107 real 0m2.485s 00:07:20.107 user 0m8.760s 00:07:20.107 sys 0m0.202s 00:07:20.107 13:18:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.107 13:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:20.107 ************************************ 00:07:20.107 END TEST accel_decomp_mcore 00:07:20.107 ************************************ 00:07:20.107 13:18:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.107 13:18:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:20.107 13:18:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.107 13:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:20.107 ************************************ 00:07:20.107 START TEST accel_decomp_full_mcore 00:07:20.107 ************************************ 00:07:20.107 13:18:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.107 13:18:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.107 13:18:17 -- accel/accel.sh@17 -- # local accel_module 00:07:20.107 13:18:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.107 13:18:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:20.107 13:18:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.107 13:18:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.107 13:18:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.107 13:18:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.107 13:18:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.107 13:18:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.107 13:18:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.107 13:18:17 -- accel/accel.sh@42 -- # jq -r . 00:07:20.107 [2024-07-26 13:18:17.405622] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
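Every accel_perf invocation in this section receives -c /dev/fd/62: build_accel_config collects optional accel module settings in the accel_json_cfg array (all of the [[ 0 -gt 0 ]] and [[ -n '' ]] checks fail in these software runs, so the array appears to stay empty) and the JSON is piped through jq -r . to the tool on file descriptor 62. A rough, hypothetical illustration of feeding a config over a descriptor with bash process substitution; the config body here is a placeholder, not the harness's actual output:

    # Placeholder JSON handed to accel_perf via a /dev/fd/NN path (bash process substitution)
    json='{"subsystems": []}'
    "$SPDK_ROOT/build/examples/accel_perf" -c <(printf '%s\n' "$json") -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf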
00:07:20.107 [2024-07-26 13:18:17.405716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769836 ] 00:07:20.107 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.107 [2024-07-26 13:18:17.467149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.107 [2024-07-26 13:18:17.499517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.107 [2024-07-26 13:18:17.499634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.107 [2024-07-26 13:18:17.499794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.107 [2024-07-26 13:18:17.499795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.494 13:18:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.494 00:07:21.494 SPDK Configuration: 00:07:21.494 Core mask: 0xf 00:07:21.494 00:07:21.494 Accel Perf Configuration: 00:07:21.494 Workload Type: decompress 00:07:21.494 Transfer size: 111250 bytes 00:07:21.494 Vector count 1 00:07:21.494 Module: software 00:07:21.494 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.494 Queue depth: 32 00:07:21.494 Allocate depth: 32 00:07:21.494 # threads/core: 1 00:07:21.494 Run time: 1 seconds 00:07:21.494 Verify: Yes 00:07:21.494 00:07:21.494 Running for 1 seconds... 00:07:21.494 00:07:21.494 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.494 ------------------------------------------------------------------------------------ 00:07:21.494 0,0 4096/s 169 MiB/s 0 0 00:07:21.494 3,0 4096/s 169 MiB/s 0 0 00:07:21.494 2,0 5920/s 244 MiB/s 0 0 00:07:21.494 1,0 4096/s 169 MiB/s 0 0 00:07:21.494 ==================================================================================== 00:07:21.494 Total 18208/s 1931 MiB/s 0 0' 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.494 13:18:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.494 13:18:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.494 13:18:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.494 13:18:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.494 13:18:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.494 13:18:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.494 13:18:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.494 13:18:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.494 13:18:18 -- accel/accel.sh@42 -- # jq -r . 00:07:21.494 [2024-07-26 13:18:18.657262] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:21.494 [2024-07-26 13:18:18.657337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770148 ] 00:07:21.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.494 [2024-07-26 13:18:18.717659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.494 [2024-07-26 13:18:18.747707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.494 [2024-07-26 13:18:18.747822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.494 [2024-07-26 13:18:18.747978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.494 [2024-07-26 13:18:18.747979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=0xf 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=decompress 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=software 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=32 00:07:21.494 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.494 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.494 13:18:18 -- accel/accel.sh@21 -- # val=32 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.495 13:18:18 -- accel/accel.sh@21 -- # val=1 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.495 13:18:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.495 13:18:18 -- accel/accel.sh@21 -- # val=Yes 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.495 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.495 13:18:18 -- accel/accel.sh@21 -- # val= 00:07:21.495 13:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.495 13:18:18 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 
13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@21 -- # val= 00:07:22.437 13:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # IFS=: 00:07:22.437 13:18:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.437 13:18:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.437 13:18:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.437 13:18:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.437 00:07:22.437 real 0m2.508s 00:07:22.437 user 0m8.839s 00:07:22.437 sys 0m0.212s 00:07:22.437 13:18:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.437 13:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:22.437 ************************************ 00:07:22.437 END TEST accel_decomp_full_mcore 00:07:22.437 ************************************ 00:07:22.698 13:18:19 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.698 13:18:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:22.698 13:18:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.698 13:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:22.698 ************************************ 00:07:22.698 START TEST accel_decomp_mthread 00:07:22.698 ************************************ 00:07:22.698 13:18:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.698 13:18:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.698 13:18:19 -- accel/accel.sh@17 -- # local accel_module 00:07:22.698 13:18:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.698 13:18:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.698 13:18:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.698 13:18:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.698 13:18:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.698 13:18:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.698 13:18:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.698 13:18:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.698 13:18:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.698 13:18:19 -- accel/accel.sh@42 -- # jq -r . 00:07:22.698 [2024-07-26 13:18:19.957604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:22.698 [2024-07-26 13:18:19.957683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770500 ] 00:07:22.698 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.698 [2024-07-26 13:18:20.028821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.698 [2024-07-26 13:18:20.059168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.084 13:18:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:24.084 00:07:24.084 SPDK Configuration: 00:07:24.084 Core mask: 0x1 00:07:24.084 00:07:24.084 Accel Perf Configuration: 00:07:24.084 Workload Type: decompress 00:07:24.084 Transfer size: 4096 bytes 00:07:24.084 Vector count 1 00:07:24.084 Module: software 00:07:24.084 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.084 Queue depth: 32 00:07:24.084 Allocate depth: 32 00:07:24.084 # threads/core: 2 00:07:24.084 Run time: 1 seconds 00:07:24.084 Verify: Yes 00:07:24.084 00:07:24.084 Running for 1 seconds... 00:07:24.084 00:07:24.084 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.084 ------------------------------------------------------------------------------------ 00:07:24.084 0,1 31872/s 58 MiB/s 0 0 00:07:24.084 0,0 31808/s 58 MiB/s 0 0 00:07:24.084 ==================================================================================== 00:07:24.084 Total 63680/s 248 MiB/s 0 0' 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.084 13:18:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.084 13:18:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.084 13:18:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.084 13:18:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.084 13:18:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.084 13:18:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.084 13:18:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.084 13:18:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.084 13:18:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.084 13:18:21 -- accel/accel.sh@42 -- # jq -r . 00:07:24.084 [2024-07-26 13:18:21.206061] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
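accel_decomp_mthread keeps the single-core 0x1 mask but adds -T 2, and the configuration above duly reports '# threads/core: 2'; the result table shows two rows for core 0 (0,0 and 0,1) whose rates, 31808 + 31872 = 63680 transfers/s, add up to the Total line. Sketch with the same hypothetical variables:

    # -T 2: two worker threads on core 0 (core mask 0x1, as reported above)
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -T 2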
00:07:24.084 [2024-07-26 13:18:21.206159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770745 ] 00:07:24.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.084 [2024-07-26 13:18:21.265763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.084 [2024-07-26 13:18:21.294521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.084 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.084 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.084 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.084 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.084 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.084 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.084 13:18:21 -- accel/accel.sh@21 -- # val=0x1 00:07:24.084 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.084 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.084 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.084 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=decompress 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=software 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=32 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 
-- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=32 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=2 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val=Yes 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:24.085 13:18:21 -- accel/accel.sh@21 -- # val= 00:07:24.085 13:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # IFS=: 00:07:24.085 13:18:21 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.029 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.029 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.029 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.030 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.030 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.030 13:18:22 -- accel/accel.sh@21 -- # val= 00:07:25.030 13:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.030 13:18:22 -- accel/accel.sh@20 -- # IFS=: 00:07:25.030 13:18:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.030 13:18:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.030 13:18:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:25.030 13:18:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.030 00:07:25.030 real 0m2.488s 00:07:25.030 user 0m2.280s 00:07:25.030 sys 0m0.217s 00:07:25.030 13:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.030 13:18:22 -- common/autotest_common.sh@10 -- # set +x 
00:07:25.030 ************************************ 00:07:25.030 END TEST accel_decomp_mthread 00:07:25.030 ************************************ 00:07:25.030 13:18:22 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.030 13:18:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:25.030 13:18:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.030 13:18:22 -- common/autotest_common.sh@10 -- # set +x 00:07:25.030 ************************************ 00:07:25.030 START TEST accel_deomp_full_mthread 00:07:25.030 ************************************ 00:07:25.030 13:18:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.030 13:18:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.030 13:18:22 -- accel/accel.sh@17 -- # local accel_module 00:07:25.030 13:18:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.030 13:18:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.030 13:18:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.030 13:18:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.030 13:18:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.030 13:18:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.030 13:18:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.030 13:18:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.030 13:18:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.030 13:18:22 -- accel/accel.sh@42 -- # jq -r . 00:07:25.030 [2024-07-26 13:18:22.488326] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:25.030 [2024-07-26 13:18:22.488414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770909 ] 00:07:25.292 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.292 [2024-07-26 13:18:22.548811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.292 [2024-07-26 13:18:22.577902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.258 13:18:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.258 00:07:26.258 SPDK Configuration: 00:07:26.258 Core mask: 0x1 00:07:26.258 00:07:26.258 Accel Perf Configuration: 00:07:26.258 Workload Type: decompress 00:07:26.258 Transfer size: 111250 bytes 00:07:26.258 Vector count 1 00:07:26.258 Module: software 00:07:26.258 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.258 Queue depth: 32 00:07:26.258 Allocate depth: 32 00:07:26.258 # threads/core: 2 00:07:26.258 Run time: 1 seconds 00:07:26.258 Verify: Yes 00:07:26.258 00:07:26.258 Running for 1 seconds... 
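The last test started above, accel_deomp_full_mthread, combines the two earlier variations: -o 0 for full 111250-byte transfers and -T 2 for two threads per core, with the per-thread results following below. Sketch with the same hypothetical variables:

    # Full-size transfers (-o 0) decompressed by two threads per core (-T 2)
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2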
00:07:26.258 00:07:26.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.258 ------------------------------------------------------------------------------------ 00:07:26.258 0,1 2080/s 85 MiB/s 0 0 00:07:26.258 0,0 2080/s 85 MiB/s 0 0 00:07:26.258 ==================================================================================== 00:07:26.258 Total 4160/s 441 MiB/s 0 0' 00:07:26.258 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.258 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.258 13:18:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.258 13:18:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.258 13:18:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.258 13:18:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.258 13:18:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.258 13:18:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.259 13:18:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.259 13:18:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.259 13:18:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.259 13:18:23 -- accel/accel.sh@42 -- # jq -r . 00:07:26.521 [2024-07-26 13:18:23.750357] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:26.521 [2024-07-26 13:18:23.750429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771216 ] 00:07:26.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.521 [2024-07-26 13:18:23.809966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.521 [2024-07-26 13:18:23.838027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=0x1 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=decompress 00:07:26.521 
13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=software 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=32 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=32 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=2 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val=Yes 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:26.521 13:18:23 -- accel/accel.sh@21 -- # val= 00:07:26.521 13:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # IFS=: 00:07:26.521 13:18:23 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@21 -- # val= 00:07:27.909 13:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.909 13:18:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.909 13:18:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.909 13:18:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.909 13:18:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.909 00:07:27.909 real 0m2.527s 00:07:27.909 user 0m2.335s 00:07:27.909 sys 0m0.200s 00:07:27.909 13:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.909 13:18:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.909 ************************************ 00:07:27.909 END TEST accel_deomp_full_mthread 00:07:27.909 ************************************ 00:07:27.909 13:18:25 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:27.909 13:18:25 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.909 13:18:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:27.909 13:18:25 -- accel/accel.sh@129 -- # build_accel_config 00:07:27.909 13:18:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.909 13:18:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.909 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:27.909 13:18:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.909 13:18:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.909 13:18:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.909 13:18:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.909 13:18:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.909 13:18:25 -- accel/accel.sh@42 -- # jq -r . 00:07:27.909 ************************************ 00:07:27.909 START TEST accel_dif_functional_tests 00:07:27.909 ************************************ 00:07:27.909 13:18:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.909 [2024-07-26 13:18:25.078440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:27.909 [2024-07-26 13:18:25.078503] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771562 ] 00:07:27.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.909 [2024-07-26 13:18:25.138517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.909 [2024-07-26 13:18:25.171840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.909 [2024-07-26 13:18:25.171964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.909 [2024-07-26 13:18:25.171966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.909 00:07:27.909 00:07:27.909 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.909 http://cunit.sourceforge.net/ 00:07:27.909 00:07:27.909 00:07:27.909 Suite: accel_dif 00:07:27.909 Test: verify: DIF generated, GUARD check ...passed 00:07:27.909 Test: verify: DIF generated, APPTAG check ...passed 00:07:27.909 Test: verify: DIF generated, REFTAG check ...passed 00:07:27.909 Test: verify: DIF not generated, GUARD check ...[2024-07-26 13:18:25.221752] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.909 [2024-07-26 13:18:25.221792] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.909 passed 00:07:27.909 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 13:18:25.221821] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.909 [2024-07-26 13:18:25.221836] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.909 passed 00:07:27.909 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 13:18:25.221851] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.909 [2024-07-26 13:18:25.221864] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.909 passed 00:07:27.909 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:27.909 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 13:18:25.221904] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:27.909 passed 00:07:27.909 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:27.909 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:27.909 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:27.909 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 13:18:25.222011] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:27.909 passed 00:07:27.909 Test: generate copy: DIF generated, GUARD check ...passed 00:07:27.909 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:27.909 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:27.909 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:27.909 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:27.909 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:27.909 Test: generate copy: iovecs-len validate ...[2024-07-26 13:18:25.222196] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:27.909 passed 00:07:27.909 Test: generate copy: buffer alignment validate ...passed 00:07:27.909 00:07:27.909 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.909 suites 1 1 n/a 0 0 00:07:27.909 tests 20 20 20 0 0 00:07:27.909 asserts 204 204 204 0 n/a 00:07:27.909 00:07:27.909 Elapsed time = 0.002 seconds 00:07:27.909 00:07:27.909 real 0m0.290s 00:07:27.909 user 0m0.413s 00:07:27.909 sys 0m0.124s 00:07:27.909 13:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.909 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:27.909 ************************************ 00:07:27.909 END TEST accel_dif_functional_tests 00:07:27.909 ************************************ 00:07:27.909 00:07:27.909 real 0m52.540s 00:07:27.909 user 1m1.001s 00:07:27.909 sys 0m5.577s 00:07:27.909 13:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.909 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:27.909 ************************************ 00:07:27.909 END TEST accel 00:07:27.909 ************************************ 00:07:28.171 13:18:25 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:28.171 13:18:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.171 13:18:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.171 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.171 ************************************ 00:07:28.171 START TEST accel_rpc 00:07:28.171 ************************************ 00:07:28.171 13:18:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:28.171 * Looking for test storage... 00:07:28.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:28.171 13:18:25 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.171 13:18:25 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=771631 00:07:28.171 13:18:25 -- accel/accel_rpc.sh@15 -- # waitforlisten 771631 00:07:28.171 13:18:25 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:28.171 13:18:25 -- common/autotest_common.sh@819 -- # '[' -z 771631 ']' 00:07:28.171 13:18:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.171 13:18:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:28.171 13:18:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.171 13:18:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:28.171 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:07:28.171 [2024-07-26 13:18:25.553032] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:28.171 [2024-07-26 13:18:25.553105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771631 ] 00:07:28.171 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.171 [2024-07-26 13:18:25.619181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.433 [2024-07-26 13:18:25.655721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.433 [2024-07-26 13:18:25.655886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.006 13:18:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.006 13:18:26 -- common/autotest_common.sh@852 -- # return 0 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:29.006 13:18:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:29.006 13:18:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.006 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.006 ************************************ 00:07:29.006 START TEST accel_assign_opcode 00:07:29.006 ************************************ 00:07:29.006 13:18:26 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:29.006 13:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.006 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.006 [2024-07-26 13:18:26.333864] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:29.006 13:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:29.006 13:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.006 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.006 [2024-07-26 13:18:26.345888] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:29.006 13:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:29.006 13:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.006 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.006 13:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:29.006 13:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.006 13:18:26 -- accel/accel_rpc.sh@42 -- # grep software 00:07:29.006 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.268 13:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.268 software 00:07:29.268 00:07:29.268 real 0m0.193s 00:07:29.268 user 0m0.045s 00:07:29.268 sys 0m0.015s 00:07:29.268 13:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.268 13:18:26 -- common/autotest_common.sh@10 -- # set +x 
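The accel_assign_opcode case above talks to a spdk_tgt held at --wait-for-rpc: the copy opcode is first pointed at a nonexistent module, then at the software module, framework initialization is completed, and the assignment is read back. Issued by hand with the workspace's rpc.py, the same sequence would look roughly like this (paths assumed):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init, only logged as a NOTICE
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # the assignment that should win
    ./scripts/rpc.py framework_start_init                     # finish subsystem initialization
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected to print: software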
00:07:29.268 ************************************ 00:07:29.268 END TEST accel_assign_opcode 00:07:29.268 ************************************ 00:07:29.268 13:18:26 -- accel/accel_rpc.sh@55 -- # killprocess 771631 00:07:29.268 13:18:26 -- common/autotest_common.sh@926 -- # '[' -z 771631 ']' 00:07:29.268 13:18:26 -- common/autotest_common.sh@930 -- # kill -0 771631 00:07:29.268 13:18:26 -- common/autotest_common.sh@931 -- # uname 00:07:29.268 13:18:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.268 13:18:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 771631 00:07:29.268 13:18:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.268 13:18:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.268 13:18:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 771631' 00:07:29.268 killing process with pid 771631 00:07:29.268 13:18:26 -- common/autotest_common.sh@945 -- # kill 771631 00:07:29.268 13:18:26 -- common/autotest_common.sh@950 -- # wait 771631 00:07:29.530 00:07:29.530 real 0m1.403s 00:07:29.530 user 0m1.492s 00:07:29.530 sys 0m0.382s 00:07:29.530 13:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.530 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.530 ************************************ 00:07:29.530 END TEST accel_rpc 00:07:29.530 ************************************ 00:07:29.530 13:18:26 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.530 13:18:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:29.530 13:18:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.530 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.530 ************************************ 00:07:29.530 START TEST app_cmdline 00:07:29.530 ************************************ 00:07:29.530 13:18:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.530 * Looking for test storage... 00:07:29.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.530 13:18:26 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.530 13:18:26 -- app/cmdline.sh@17 -- # spdk_tgt_pid=772037 00:07:29.530 13:18:26 -- app/cmdline.sh@18 -- # waitforlisten 772037 00:07:29.530 13:18:26 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.530 13:18:26 -- common/autotest_common.sh@819 -- # '[' -z 772037 ']' 00:07:29.530 13:18:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.530 13:18:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.530 13:18:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.530 13:18:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.530 13:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:29.530 [2024-07-26 13:18:27.000862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
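The cmdline test above restarts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock; the trace further down confirms that an unlisted call such as env_dpdk_get_mem_stats comes back as JSON-RPC error -32601 "Method not found". A hand-driven sketch of the same checks (paths assumed):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allow-listed: returns the version object shown below
    ./scripts/rpc.py rpc_get_methods          # allow-listed: exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # not allow-listed: fails with -32601 Method not found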
00:07:29.530 [2024-07-26 13:18:27.000925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772037 ] 00:07:29.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.791 [2024-07-26 13:18:27.064748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.791 [2024-07-26 13:18:27.099319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.791 [2024-07-26 13:18:27.099458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.363 13:18:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:30.363 13:18:27 -- common/autotest_common.sh@852 -- # return 0 00:07:30.363 13:18:27 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:30.624 { 00:07:30.624 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:30.624 "fields": { 00:07:30.624 "major": 24, 00:07:30.624 "minor": 1, 00:07:30.624 "patch": 1, 00:07:30.624 "suffix": "-pre", 00:07:30.624 "commit": "dbef7efac" 00:07:30.624 } 00:07:30.624 } 00:07:30.624 13:18:27 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.624 13:18:27 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.624 13:18:27 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.624 13:18:27 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.624 13:18:27 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.624 13:18:27 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.624 13:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:30.624 13:18:27 -- app/cmdline.sh@26 -- # sort 00:07:30.624 13:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:30.624 13:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:30.624 13:18:27 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.624 13:18:27 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.624 13:18:27 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.624 13:18:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:30.624 13:18:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.624 13:18:27 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.624 13:18:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.624 13:18:27 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.624 13:18:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.624 13:18:27 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.624 13:18:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.624 13:18:27 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.624 13:18:27 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:30.624 13:18:27 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.886 request: 00:07:30.886 { 00:07:30.886 "method": "env_dpdk_get_mem_stats", 00:07:30.886 "req_id": 1 00:07:30.886 } 00:07:30.886 Got JSON-RPC error response 00:07:30.886 response: 00:07:30.886 { 00:07:30.886 "code": -32601, 00:07:30.886 "message": "Method not found" 00:07:30.886 } 00:07:30.886 13:18:28 -- common/autotest_common.sh@643 -- # es=1 00:07:30.886 13:18:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:30.886 13:18:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:30.886 13:18:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:30.886 13:18:28 -- app/cmdline.sh@1 -- # killprocess 772037 00:07:30.886 13:18:28 -- common/autotest_common.sh@926 -- # '[' -z 772037 ']' 00:07:30.886 13:18:28 -- common/autotest_common.sh@930 -- # kill -0 772037 00:07:30.886 13:18:28 -- common/autotest_common.sh@931 -- # uname 00:07:30.886 13:18:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:30.886 13:18:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 772037 00:07:30.886 13:18:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:30.886 13:18:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:30.886 13:18:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 772037' 00:07:30.886 killing process with pid 772037 00:07:30.886 13:18:28 -- common/autotest_common.sh@945 -- # kill 772037 00:07:30.886 13:18:28 -- common/autotest_common.sh@950 -- # wait 772037 00:07:31.147 00:07:31.147 real 0m1.532s 00:07:31.147 user 0m1.855s 00:07:31.147 sys 0m0.395s 00:07:31.147 13:18:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.147 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.147 ************************************ 00:07:31.147 END TEST app_cmdline 00:07:31.147 ************************************ 00:07:31.147 13:18:28 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:31.147 13:18:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:31.147 13:18:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.147 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.147 ************************************ 00:07:31.147 START TEST version 00:07:31.147 ************************************ 00:07:31.147 13:18:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:31.147 * Looking for test storage... 
00:07:31.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:31.147 13:18:28 -- app/version.sh@17 -- # get_header_version major 00:07:31.147 13:18:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.147 13:18:28 -- app/version.sh@14 -- # cut -f2 00:07:31.147 13:18:28 -- app/version.sh@14 -- # tr -d '"' 00:07:31.147 13:18:28 -- app/version.sh@17 -- # major=24 00:07:31.147 13:18:28 -- app/version.sh@18 -- # get_header_version minor 00:07:31.147 13:18:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.147 13:18:28 -- app/version.sh@14 -- # cut -f2 00:07:31.147 13:18:28 -- app/version.sh@14 -- # tr -d '"' 00:07:31.147 13:18:28 -- app/version.sh@18 -- # minor=1 00:07:31.147 13:18:28 -- app/version.sh@19 -- # get_header_version patch 00:07:31.147 13:18:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.147 13:18:28 -- app/version.sh@14 -- # cut -f2 00:07:31.147 13:18:28 -- app/version.sh@14 -- # tr -d '"' 00:07:31.147 13:18:28 -- app/version.sh@19 -- # patch=1 00:07:31.147 13:18:28 -- app/version.sh@20 -- # get_header_version suffix 00:07:31.147 13:18:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:31.147 13:18:28 -- app/version.sh@14 -- # cut -f2 00:07:31.147 13:18:28 -- app/version.sh@14 -- # tr -d '"' 00:07:31.147 13:18:28 -- app/version.sh@20 -- # suffix=-pre 00:07:31.147 13:18:28 -- app/version.sh@22 -- # version=24.1 00:07:31.147 13:18:28 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.147 13:18:28 -- app/version.sh@25 -- # version=24.1.1 00:07:31.147 13:18:28 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:31.147 13:18:28 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:31.147 13:18:28 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.147 13:18:28 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:31.147 13:18:28 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:31.147 00:07:31.147 real 0m0.165s 00:07:31.147 user 0m0.091s 00:07:31.147 sys 0m0.113s 00:07:31.147 13:18:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.147 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.147 ************************************ 00:07:31.147 END TEST version 00:07:31.147 ************************************ 00:07:31.409 13:18:28 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@204 -- # uname -s 00:07:31.409 13:18:28 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:31.409 13:18:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:31.409 13:18:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:31.409 13:18:28 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:31.409 13:18:28 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:31.409 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.409 13:18:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:31.409 13:18:28 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:31.409 13:18:28 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.409 13:18:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:31.409 13:18:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.409 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.409 ************************************ 00:07:31.409 START TEST nvmf_tcp 00:07:31.409 ************************************ 00:07:31.409 13:18:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:31.409 * Looking for test storage... 00:07:31.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:31.409 13:18:28 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:31.409 13:18:28 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:31.409 13:18:28 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.409 13:18:28 -- nvmf/common.sh@7 -- # uname -s 00:07:31.409 13:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.409 13:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.409 13:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.409 13:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.409 13:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.409 13:18:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.409 13:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.409 13:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.409 13:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.409 13:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.409 13:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.410 13:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.410 13:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.410 13:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.410 13:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.410 13:18:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.410 13:18:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.410 13:18:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.410 13:18:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.410 13:18:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.410 13:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.410 13:18:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.410 13:18:28 -- paths/export.sh@5 -- # export PATH 00:07:31.410 13:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.410 13:18:28 -- nvmf/common.sh@46 -- # : 0 00:07:31.410 13:18:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:31.410 13:18:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:31.410 13:18:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:31.410 13:18:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.410 13:18:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.410 13:18:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:31.410 13:18:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:31.410 13:18:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:31.410 13:18:28 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:31.410 13:18:28 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:31.410 13:18:28 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:31.410 13:18:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:31.410 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.410 13:18:28 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:31.410 13:18:28 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:31.410 13:18:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:31.410 13:18:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.410 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.410 ************************************ 00:07:31.410 START TEST nvmf_example 00:07:31.410 ************************************ 00:07:31.410 13:18:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:31.671 * Looking for test storage... 
00:07:31.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.671 13:18:28 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.671 13:18:28 -- nvmf/common.sh@7 -- # uname -s 00:07:31.671 13:18:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.671 13:18:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.671 13:18:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.671 13:18:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.671 13:18:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.671 13:18:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.671 13:18:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.671 13:18:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.671 13:18:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.671 13:18:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.671 13:18:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.671 13:18:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.671 13:18:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.671 13:18:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.671 13:18:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.671 13:18:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.671 13:18:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.671 13:18:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.671 13:18:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.671 13:18:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.671 13:18:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.672 13:18:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.672 13:18:28 -- paths/export.sh@5 -- # export PATH 00:07:31.672 13:18:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.672 13:18:28 -- nvmf/common.sh@46 -- # : 0 00:07:31.672 13:18:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:31.672 13:18:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:31.672 13:18:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:31.672 13:18:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.672 13:18:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.672 13:18:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:31.672 13:18:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:31.672 13:18:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:31.672 13:18:28 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:31.672 13:18:28 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:31.672 13:18:28 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:31.672 13:18:28 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:31.672 13:18:28 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:31.672 13:18:28 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:31.672 13:18:28 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:31.672 13:18:28 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:31.672 13:18:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:31.672 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:07:31.672 13:18:28 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:31.672 13:18:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:31.672 13:18:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.672 13:18:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:31.672 13:18:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:31.672 13:18:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:31.672 13:18:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.672 13:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.672 13:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.672 13:18:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:31.672 13:18:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:31.672 13:18:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:31.672 13:18:28 -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.823 13:18:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:39.823 13:18:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:39.823 13:18:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:39.823 13:18:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:39.823 13:18:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:39.823 13:18:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:39.823 13:18:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:39.823 13:18:35 -- nvmf/common.sh@294 -- # net_devs=() 00:07:39.823 13:18:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:39.823 13:18:35 -- nvmf/common.sh@295 -- # e810=() 00:07:39.823 13:18:35 -- nvmf/common.sh@295 -- # local -ga e810 00:07:39.823 13:18:35 -- nvmf/common.sh@296 -- # x722=() 00:07:39.823 13:18:35 -- nvmf/common.sh@296 -- # local -ga x722 00:07:39.823 13:18:35 -- nvmf/common.sh@297 -- # mlx=() 00:07:39.823 13:18:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:39.823 13:18:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.823 13:18:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:39.823 13:18:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:39.823 13:18:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:39.823 13:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.823 13:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:39.823 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:39.823 13:18:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.823 13:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:39.823 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:39.823 13:18:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:07:39.823 13:18:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:39.823 13:18:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:39.823 13:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.824 13:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.824 13:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.824 13:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.824 13:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:39.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:39.824 13:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.824 13:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.824 13:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.824 13:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.824 13:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.824 13:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:39.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:39.824 13:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.824 13:18:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:39.824 13:18:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:39.824 13:18:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:39.824 13:18:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:39.824 13:18:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:39.824 13:18:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.824 13:18:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.824 13:18:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.824 13:18:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:39.824 13:18:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.824 13:18:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.824 13:18:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:39.824 13:18:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.824 13:18:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.824 13:18:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:39.824 13:18:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:39.824 13:18:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.824 13:18:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.824 13:18:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.824 13:18:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.824 13:18:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:39.824 13:18:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.824 13:18:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.824 13:18:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.824 13:18:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:39.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:07:39.824 00:07:39.824 --- 10.0.0.2 ping statistics --- 00:07:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.824 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:07:39.824 13:18:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:07:39.824 00:07:39.824 --- 10.0.0.1 ping statistics --- 00:07:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.824 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:07:39.824 13:18:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.824 13:18:36 -- nvmf/common.sh@410 -- # return 0 00:07:39.824 13:18:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:39.824 13:18:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.824 13:18:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:39.824 13:18:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:39.824 13:18:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.824 13:18:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:39.824 13:18:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:39.824 13:18:36 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:39.824 13:18:36 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:39.824 13:18:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:39.824 13:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:36 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:39.824 13:18:36 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:39.824 13:18:36 -- target/nvmf_example.sh@34 -- # nvmfpid=776214 00:07:39.824 13:18:36 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.824 13:18:36 -- target/nvmf_example.sh@36 -- # waitforlisten 776214 00:07:39.824 13:18:36 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:39.824 13:18:36 -- common/autotest_common.sh@819 -- # '[' -z 776214 ']' 00:07:39.824 13:18:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.824 13:18:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:39.824 13:18:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
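The two pings above close out nvmf_tcp_init, which splits the e810 port pair between namespaces: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and takes the target address, while cvl_0_1 stays in the default namespace as the initiator side. Condensed from the commands recorded in this run (the cvl_0_0/cvl_0_1 names are specific to this CI host):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # target address reachable from the host side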
00:07:39.824 13:18:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:39.824 13:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.824 13:18:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.824 13:18:37 -- common/autotest_common.sh@852 -- # return 0 00:07:39.824 13:18:37 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:39.824 13:18:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.824 13:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.824 13:18:37 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:39.824 13:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.824 13:18:37 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:39.824 13:18:37 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.824 13:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.824 13:18:37 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:39.824 13:18:37 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.824 13:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.824 13:18:37 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.824 13:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.824 13:18:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 13:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.824 13:18:37 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:39.824 13:18:37 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:39.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.131 Initializing NVMe Controllers 00:07:52.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:52.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:52.131 Initialization complete. Launching workers. 
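The RPC trace above provisions the target that the perf run then measures: a TCP transport, a 64 MB / 512-byte-block malloc bdev, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420, after which spdk_nvme_perf connects over NVMe/TCP from the host side. A sketch of the equivalent manual sequence using scripts/rpc.py (workspace paths assumed; the target itself keeps running inside the cvl_0_0_ns_spdk namespace as started above):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                    # 64 MB bdev, reported back as Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'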
00:07:52.131 ======================================================== 00:07:52.131 Latency(us) 00:07:52.131 Device Information : IOPS MiB/s Average min max 00:07:52.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15101.90 58.99 4238.97 877.54 16059.47 00:07:52.131 ======================================================== 00:07:52.131 Total : 15101.90 58.99 4238.97 877.54 16059.47 00:07:52.131 00:07:52.131 13:18:47 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:52.131 13:18:47 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:52.131 13:18:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:52.131 13:18:47 -- nvmf/common.sh@116 -- # sync 00:07:52.131 13:18:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:52.131 13:18:47 -- nvmf/common.sh@119 -- # set +e 00:07:52.131 13:18:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:52.131 13:18:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:52.131 rmmod nvme_tcp 00:07:52.131 rmmod nvme_fabrics 00:07:52.131 rmmod nvme_keyring 00:07:52.131 13:18:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:52.131 13:18:47 -- nvmf/common.sh@123 -- # set -e 00:07:52.131 13:18:47 -- nvmf/common.sh@124 -- # return 0 00:07:52.131 13:18:47 -- nvmf/common.sh@477 -- # '[' -n 776214 ']' 00:07:52.131 13:18:47 -- nvmf/common.sh@478 -- # killprocess 776214 00:07:52.131 13:18:47 -- common/autotest_common.sh@926 -- # '[' -z 776214 ']' 00:07:52.131 13:18:47 -- common/autotest_common.sh@930 -- # kill -0 776214 00:07:52.131 13:18:47 -- common/autotest_common.sh@931 -- # uname 00:07:52.131 13:18:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:52.131 13:18:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 776214 00:07:52.131 13:18:47 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:52.131 13:18:47 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:52.131 13:18:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 776214' 00:07:52.131 killing process with pid 776214 00:07:52.131 13:18:47 -- common/autotest_common.sh@945 -- # kill 776214 00:07:52.131 13:18:47 -- common/autotest_common.sh@950 -- # wait 776214 00:07:52.131 nvmf threads initialize successfully 00:07:52.131 bdev subsystem init successfully 00:07:52.131 created a nvmf target service 00:07:52.131 create targets's poll groups done 00:07:52.131 all subsystems of target started 00:07:52.131 nvmf target is running 00:07:52.131 all subsystems of target stopped 00:07:52.131 destroy targets's poll groups done 00:07:52.131 destroyed the nvmf target service 00:07:52.131 bdev subsystem finish successfully 00:07:52.131 nvmf threads destroy successfully 00:07:52.131 13:18:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:52.131 13:18:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:52.131 13:18:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:52.131 13:18:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.131 13:18:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:52.131 13:18:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.131 13:18:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.131 13:18:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.393 13:18:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:52.393 13:18:49 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:52.393 13:18:49 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:52.393 13:18:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.659 00:07:52.659 real 0m21.056s 00:07:52.659 user 0m46.834s 00:07:52.659 sys 0m6.473s 00:07:52.659 13:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.659 13:18:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.659 ************************************ 00:07:52.659 END TEST nvmf_example 00:07:52.659 ************************************ 00:07:52.659 13:18:49 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:52.659 13:18:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:52.659 13:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.659 13:18:49 -- common/autotest_common.sh@10 -- # set +x 00:07:52.659 ************************************ 00:07:52.659 START TEST nvmf_filesystem 00:07:52.659 ************************************ 00:07:52.659 13:18:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:52.659 * Looking for test storage... 00:07:52.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.659 13:18:50 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:52.659 13:18:50 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:52.659 13:18:50 -- common/autotest_common.sh@34 -- # set -e 00:07:52.659 13:18:50 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:52.659 13:18:50 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:52.659 13:18:50 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:52.659 13:18:50 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:52.659 13:18:50 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:52.659 13:18:50 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:52.659 13:18:50 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:52.659 13:18:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:52.659 13:18:50 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:52.659 13:18:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:52.659 13:18:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:52.659 13:18:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:52.659 13:18:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:52.659 13:18:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:52.659 13:18:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:52.659 13:18:50 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:52.659 13:18:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:52.659 13:18:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:52.659 13:18:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:52.659 13:18:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:52.659 13:18:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:52.659 13:18:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:52.659 13:18:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:52.659 13:18:50 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
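That closes the nvmf_example stage (about 21 s wall clock covering setup, the 10 s perf run and teardown); run_test then launches the filesystem suite. To rerun just this next stage outside the full autotest job, the command captured in the trace can be invoked directly, assuming the same workspace layout and test NICs:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/filesystem.sh --transport=tcp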
00:07:52.659 13:18:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:52.659 13:18:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:52.659 13:18:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:52.659 13:18:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:52.659 13:18:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:52.659 13:18:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:52.659 13:18:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:52.659 13:18:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:52.659 13:18:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:52.659 13:18:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:52.659 13:18:50 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:52.659 13:18:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:52.659 13:18:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:52.659 13:18:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:52.659 13:18:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:52.659 13:18:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.659 13:18:50 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:52.659 13:18:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:52.659 13:18:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:52.659 13:18:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:52.659 13:18:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:52.659 13:18:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:52.659 13:18:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:52.659 13:18:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:52.659 13:18:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:52.659 13:18:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:52.659 13:18:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:52.659 13:18:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:52.659 13:18:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:52.659 13:18:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:52.659 13:18:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:52.659 13:18:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:52.659 13:18:50 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:52.659 13:18:50 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:52.659 13:18:50 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:52.659 13:18:50 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:52.659 13:18:50 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:52.659 13:18:50 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:52.659 13:18:50 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:52.659 13:18:50 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:52.659 13:18:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.659 13:18:50 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:52.659 13:18:50 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:52.659 13:18:50 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:52.659 13:18:50 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 
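This CONFIG_* dump is test/common/build_config.sh, i.e. a record of how this SPDK tree was configured (external DPDK under /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build, shared libraries, vfio-user, debug, UBSan, fio plugin). The configure invocation itself is not part of this log; it was presumably something along these lines, inferred from the dump rather than taken from the trace:

  ./configure --with-shared --with-vfio-user --with-fio=/usr/src/fio \
      --enable-debug --enable-ubsan \
      --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build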
00:07:52.659 13:18:50 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:52.659 13:18:50 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:52.659 13:18:50 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:52.659 13:18:50 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:52.659 13:18:50 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:52.659 13:18:50 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:52.659 13:18:50 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:52.659 13:18:50 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:52.659 13:18:50 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:52.659 13:18:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:52.659 13:18:50 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:52.659 13:18:50 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:52.659 13:18:50 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:52.659 13:18:50 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:52.659 13:18:50 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:52.659 13:18:50 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:52.659 13:18:50 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:52.659 13:18:50 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:52.659 13:18:50 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.659 13:18:50 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.660 13:18:50 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:52.660 13:18:50 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.660 13:18:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:52.660 13:18:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:52.660 13:18:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:52.660 13:18:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:52.660 13:18:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:52.660 13:18:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:52.660 13:18:50 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:52.660 13:18:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:52.660 #define SPDK_CONFIG_H 00:07:52.660 #define SPDK_CONFIG_APPS 1 00:07:52.660 #define SPDK_CONFIG_ARCH native 00:07:52.660 #undef SPDK_CONFIG_ASAN 00:07:52.660 #undef SPDK_CONFIG_AVAHI 00:07:52.660 #undef SPDK_CONFIG_CET 00:07:52.660 #define SPDK_CONFIG_COVERAGE 1 00:07:52.660 #define SPDK_CONFIG_CROSS_PREFIX 00:07:52.660 #undef SPDK_CONFIG_CRYPTO 00:07:52.660 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:52.660 #undef SPDK_CONFIG_CUSTOMOCF 00:07:52.660 #undef SPDK_CONFIG_DAOS 00:07:52.660 #define SPDK_CONFIG_DAOS_DIR 00:07:52.660 #define SPDK_CONFIG_DEBUG 1 00:07:52.660 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:52.660 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.660 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:52.660 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.660 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:52.660 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:52.660 #define SPDK_CONFIG_EXAMPLES 1 00:07:52.660 #undef SPDK_CONFIG_FC 00:07:52.660 #define SPDK_CONFIG_FC_PATH 00:07:52.660 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:52.660 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:52.660 #undef SPDK_CONFIG_FUSE 00:07:52.660 #undef SPDK_CONFIG_FUZZER 00:07:52.660 #define SPDK_CONFIG_FUZZER_LIB 00:07:52.660 #undef SPDK_CONFIG_GOLANG 00:07:52.660 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:52.660 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:52.660 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:52.660 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:52.660 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:52.660 #define SPDK_CONFIG_IDXD 1 00:07:52.660 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:52.660 #undef SPDK_CONFIG_IPSEC_MB 00:07:52.660 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:52.660 #define SPDK_CONFIG_ISAL 1 00:07:52.660 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:52.660 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:52.660 #define SPDK_CONFIG_LIBDIR 00:07:52.660 #undef SPDK_CONFIG_LTO 00:07:52.660 #define SPDK_CONFIG_MAX_LCORES 00:07:52.660 #define SPDK_CONFIG_NVME_CUSE 1 00:07:52.660 #undef SPDK_CONFIG_OCF 00:07:52.660 #define SPDK_CONFIG_OCF_PATH 00:07:52.660 #define SPDK_CONFIG_OPENSSL_PATH 00:07:52.660 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:52.660 #undef SPDK_CONFIG_PGO_USE 00:07:52.660 #define SPDK_CONFIG_PREFIX /usr/local 00:07:52.660 #undef SPDK_CONFIG_RAID5F 00:07:52.660 #undef SPDK_CONFIG_RBD 00:07:52.660 #define SPDK_CONFIG_RDMA 1 00:07:52.660 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:52.660 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:52.660 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:52.660 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:52.660 #define SPDK_CONFIG_SHARED 1 00:07:52.660 #undef SPDK_CONFIG_SMA 00:07:52.660 #define SPDK_CONFIG_TESTS 1 00:07:52.660 #undef SPDK_CONFIG_TSAN 00:07:52.660 #define SPDK_CONFIG_UBLK 1 00:07:52.660 #define SPDK_CONFIG_UBSAN 1 00:07:52.660 #undef SPDK_CONFIG_UNIT_TESTS 00:07:52.660 #undef SPDK_CONFIG_URING 00:07:52.660 #define SPDK_CONFIG_URING_PATH 00:07:52.660 #undef SPDK_CONFIG_URING_ZNS 00:07:52.660 #undef SPDK_CONFIG_USDT 00:07:52.660 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:52.660 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:52.660 #define SPDK_CONFIG_VFIO_USER 1 00:07:52.660 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:52.660 #define SPDK_CONFIG_VHOST 1 00:07:52.660 #define SPDK_CONFIG_VIRTIO 1 00:07:52.660 #undef SPDK_CONFIG_VTUNE 00:07:52.660 #define SPDK_CONFIG_VTUNE_DIR 00:07:52.660 #define SPDK_CONFIG_WERROR 1 00:07:52.660 #define SPDK_CONFIG_WPDK_DIR 00:07:52.660 #undef SPDK_CONFIG_XNVME 00:07:52.660 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:52.660 13:18:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:52.660 13:18:50 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.660 13:18:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.660 13:18:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.660 
13:18:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.660 13:18:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.660 13:18:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.660 13:18:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.660 13:18:50 -- paths/export.sh@5 -- # export PATH 00:07:52.660 13:18:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.660 13:18:50 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:52.660 13:18:50 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:52.660 13:18:50 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:52.660 13:18:50 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:52.660 13:18:50 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:52.660 13:18:50 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.660 13:18:50 -- pm/common@16 -- # TEST_TAG=N/A 00:07:52.660 13:18:50 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:52.660 13:18:50 -- common/autotest_common.sh@52 -- # : 1 00:07:52.660 13:18:50 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:52.660 13:18:50 -- common/autotest_common.sh@56 -- # : 0 
00:07:52.660 13:18:50 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:52.660 13:18:50 -- common/autotest_common.sh@58 -- # : 0 00:07:52.660 13:18:50 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:52.660 13:18:50 -- common/autotest_common.sh@60 -- # : 1 00:07:52.660 13:18:50 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:52.660 13:18:50 -- common/autotest_common.sh@62 -- # : 0 00:07:52.660 13:18:50 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:52.660 13:18:50 -- common/autotest_common.sh@64 -- # : 00:07:52.660 13:18:50 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:52.660 13:18:50 -- common/autotest_common.sh@66 -- # : 0 00:07:52.660 13:18:50 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:52.660 13:18:50 -- common/autotest_common.sh@68 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:52.661 13:18:50 -- common/autotest_common.sh@70 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:52.661 13:18:50 -- common/autotest_common.sh@72 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:52.661 13:18:50 -- common/autotest_common.sh@74 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:52.661 13:18:50 -- common/autotest_common.sh@76 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:52.661 13:18:50 -- common/autotest_common.sh@78 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:52.661 13:18:50 -- common/autotest_common.sh@80 -- # : 1 00:07:52.661 13:18:50 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:52.661 13:18:50 -- common/autotest_common.sh@82 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:52.661 13:18:50 -- common/autotest_common.sh@84 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:52.661 13:18:50 -- common/autotest_common.sh@86 -- # : 1 00:07:52.661 13:18:50 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:52.661 13:18:50 -- common/autotest_common.sh@88 -- # : 1 00:07:52.661 13:18:50 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:52.661 13:18:50 -- common/autotest_common.sh@90 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:52.661 13:18:50 -- common/autotest_common.sh@92 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:52.661 13:18:50 -- common/autotest_common.sh@94 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:52.661 13:18:50 -- common/autotest_common.sh@96 -- # : tcp 00:07:52.661 13:18:50 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:52.661 13:18:50 -- common/autotest_common.sh@98 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:52.661 13:18:50 -- common/autotest_common.sh@100 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:52.661 13:18:50 -- common/autotest_common.sh@102 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:52.661 13:18:50 -- 
common/autotest_common.sh@104 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:52.661 13:18:50 -- common/autotest_common.sh@106 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:52.661 13:18:50 -- common/autotest_common.sh@108 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:52.661 13:18:50 -- common/autotest_common.sh@110 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:52.661 13:18:50 -- common/autotest_common.sh@112 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:52.661 13:18:50 -- common/autotest_common.sh@114 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:52.661 13:18:50 -- common/autotest_common.sh@116 -- # : 1 00:07:52.661 13:18:50 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:52.661 13:18:50 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:52.661 13:18:50 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:52.661 13:18:50 -- common/autotest_common.sh@120 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:52.661 13:18:50 -- common/autotest_common.sh@122 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:52.661 13:18:50 -- common/autotest_common.sh@124 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:52.661 13:18:50 -- common/autotest_common.sh@126 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:52.661 13:18:50 -- common/autotest_common.sh@128 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:52.661 13:18:50 -- common/autotest_common.sh@130 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:52.661 13:18:50 -- common/autotest_common.sh@132 -- # : v23.11 00:07:52.661 13:18:50 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:52.661 13:18:50 -- common/autotest_common.sh@134 -- # : true 00:07:52.661 13:18:50 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:52.661 13:18:50 -- common/autotest_common.sh@136 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:52.661 13:18:50 -- common/autotest_common.sh@138 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:52.661 13:18:50 -- common/autotest_common.sh@140 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:52.661 13:18:50 -- common/autotest_common.sh@142 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:52.661 13:18:50 -- common/autotest_common.sh@144 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:52.661 13:18:50 -- common/autotest_common.sh@146 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:52.661 13:18:50 -- common/autotest_common.sh@148 -- # : e810 00:07:52.661 13:18:50 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:52.661 13:18:50 -- common/autotest_common.sh@150 -- # : 0 00:07:52.661 13:18:50 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:52.661 13:18:50 -- common/autotest_common.sh@152 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:52.661 13:18:50 -- common/autotest_common.sh@154 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:52.661 13:18:50 -- common/autotest_common.sh@156 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:52.661 13:18:50 -- common/autotest_common.sh@158 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:52.661 13:18:50 -- common/autotest_common.sh@160 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:52.661 13:18:50 -- common/autotest_common.sh@163 -- # : 00:07:52.661 13:18:50 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:52.661 13:18:50 -- common/autotest_common.sh@165 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:52.661 13:18:50 -- common/autotest_common.sh@167 -- # : 0 00:07:52.661 13:18:50 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:52.661 13:18:50 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.661 13:18:50 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.662 13:18:50 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:52.662 13:18:50 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:52.662 13:18:50 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:52.662 13:18:50 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:52.662 13:18:50 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:52.662 13:18:50 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:52.662 13:18:50 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:52.662 13:18:50 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:52.662 13:18:50 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:52.662 13:18:50 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:52.662 13:18:50 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:52.662 13:18:50 -- common/autotest_common.sh@196 -- # cat 00:07:52.662 13:18:50 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:52.662 13:18:50 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:52.662 13:18:50 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:52.662 13:18:50 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:52.662 13:18:50 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:52.662 13:18:50 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:52.662 13:18:50 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:52.662 13:18:50 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.662 13:18:50 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:52.662 13:18:50 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.662 13:18:50 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:52.662 13:18:50 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:52.662 13:18:50 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:52.662 13:18:50 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:52.662 13:18:50 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:52.662 13:18:50 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:52.662 13:18:50 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:52.662 13:18:50 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:52.662 13:18:50 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:52.662 13:18:50 -- common/autotest_common.sh@249 -- # valgrind= 00:07:52.662 13:18:50 -- common/autotest_common.sh@255 -- # uname -s 00:07:52.662 13:18:50 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:52.662 13:18:50 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:52.662 13:18:50 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:52.662 13:18:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:52.662 13:18:50 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:52.662 13:18:50 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:52.662 13:18:50 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:52.662 13:18:50 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:52.662 13:18:50 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:52.662 13:18:50 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:52.662 13:18:50 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:52.662 13:18:50 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:52.662 13:18:50 -- common/autotest_common.sh@297 
-- # TEST_TRANSPORT=tcp 00:07:52.662 13:18:50 -- common/autotest_common.sh@309 -- # [[ -z 779151 ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@309 -- # kill -0 779151 00:07:52.662 13:18:50 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:52.662 13:18:50 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:52.662 13:18:50 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:52.662 13:18:50 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:52.662 13:18:50 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:52.662 13:18:50 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:52.662 13:18:50 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:52.662 13:18:50 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.0z7X4z 00:07:52.662 13:18:50 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:52.662 13:18:50 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:52.662 13:18:50 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0z7X4z/tests/target /tmp/spdk.0z7X4z 00:07:52.662 13:18:50 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:52.662 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.662 13:18:50 -- common/autotest_common.sh@318 -- # df -T 00:07:52.662 13:18:50 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:52.662 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:52.662 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=954236928 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:52.662 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330192896 00:07:52.662 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=117053157376 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370976256 00:07:52.662 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=12317818880 00:07:52.662 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=64631967744 00:07:52.662 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685486080 00:07:52.662 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:52.662 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:52.662 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864499200 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874198528 00:07:52.663 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=9699328 00:07:52.663 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=216064 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:52.663 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=287744 00:07:52.663 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=64683536384 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685490176 00:07:52.663 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=1953792 00:07:52.663 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:07:52.663 13:18:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:07:52.663 13:18:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:52.663 13:18:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:52.663 13:18:50 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:52.663 * Looking for test storage... 
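The storage probe in this block runs df -T once and folds every mount line into associative arrays before deciding where to put the roughly 2 GiB test area. A condensed sketch of that parsing pattern, with the array and field names taken from the trace (values are kept exactly as df reports them; the harness's set_test_storage does the actual size accounting and candidate selection):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$size
      uses["$mount"]=$use
      avails["$mount"]=$avail
  done < <(df -T | grep -v Filesystem)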
00:07:52.663 13:18:50 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:52.663 13:18:50 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:52.663 13:18:50 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.663 13:18:50 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:52.663 13:18:50 -- common/autotest_common.sh@363 -- # mount=/ 00:07:52.663 13:18:50 -- common/autotest_common.sh@365 -- # target_space=117053157376 00:07:52.663 13:18:50 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:52.663 13:18:50 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:52.663 13:18:50 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:52.663 13:18:50 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:52.663 13:18:50 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:52.663 13:18:50 -- common/autotest_common.sh@372 -- # new_size=14532411392 00:07:52.663 13:18:50 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:52.663 13:18:50 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.663 13:18:50 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.663 13:18:50 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.663 13:18:50 -- common/autotest_common.sh@380 -- # return 0 00:07:52.663 13:18:50 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:52.663 13:18:50 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:52.663 13:18:50 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:52.663 13:18:50 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:52.663 13:18:50 -- common/autotest_common.sh@1672 -- # true 00:07:52.663 13:18:50 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:52.663 13:18:50 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:52.663 13:18:50 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:52.663 13:18:50 -- common/autotest_common.sh@27 -- # exec 00:07:52.663 13:18:50 -- common/autotest_common.sh@29 -- # exec 00:07:52.663 13:18:50 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:52.663 13:18:50 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:52.663 13:18:50 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:52.663 13:18:50 -- common/autotest_common.sh@18 -- # set -x 00:07:52.663 13:18:50 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.663 13:18:50 -- nvmf/common.sh@7 -- # uname -s 00:07:52.663 13:18:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.663 13:18:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.663 13:18:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.663 13:18:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.663 13:18:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.663 13:18:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.663 13:18:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.663 13:18:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.663 13:18:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.663 13:18:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.925 13:18:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.925 13:18:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.925 13:18:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.925 13:18:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.925 13:18:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.925 13:18:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.925 13:18:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.925 13:18:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.925 13:18:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.925 13:18:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.926 13:18:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.926 13:18:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.926 13:18:50 -- paths/export.sh@5 -- # export PATH 00:07:52.926 13:18:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.926 13:18:50 -- nvmf/common.sh@46 -- # : 0 00:07:52.926 13:18:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:52.926 13:18:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:52.926 13:18:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:52.926 13:18:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.926 13:18:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.926 13:18:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:52.926 13:18:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:52.926 13:18:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:52.926 13:18:50 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:52.926 13:18:50 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:52.926 13:18:50 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:52.926 13:18:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:52.926 13:18:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.926 13:18:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:52.926 13:18:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:52.926 13:18:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:52.926 13:18:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.926 13:18:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.926 13:18:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.926 13:18:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:52.926 13:18:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:52.926 13:18:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:52.926 13:18:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.573 13:18:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:59.573 13:18:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:59.573 13:18:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:59.573 13:18:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:59.573 13:18:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:59.573 13:18:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:59.573 13:18:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:59.573 13:18:56 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:59.573 13:18:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:59.573 13:18:56 -- nvmf/common.sh@295 -- # e810=() 00:07:59.573 13:18:56 -- nvmf/common.sh@295 -- # local -ga e810 00:07:59.573 13:18:56 -- nvmf/common.sh@296 -- # x722=() 00:07:59.573 13:18:56 -- nvmf/common.sh@296 -- # local -ga x722 00:07:59.573 13:18:56 -- nvmf/common.sh@297 -- # mlx=() 00:07:59.573 13:18:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:59.573 13:18:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.573 13:18:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.574 13:18:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.574 13:18:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:59.574 13:18:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:59.574 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:59.574 13:18:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:59.574 13:18:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:59.574 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:59.574 13:18:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:59.574 13:18:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.574 13:18:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.574 13:18:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:59.574 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:59.574 13:18:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:59.574 13:18:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.574 13:18:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.574 13:18:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:59.574 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:59.574 13:18:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:59.574 13:18:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:59.574 13:18:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.574 13:18:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.574 13:18:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:59.574 13:18:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.574 13:18:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.574 13:18:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:59.574 13:18:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.574 13:18:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.574 13:18:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:59.574 13:18:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:59.574 13:18:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.574 13:18:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.574 13:18:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.574 13:18:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.574 13:18:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:59.574 13:18:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.574 13:18:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.574 13:18:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.574 13:18:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:59.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.757 ms 00:07:59.574 00:07:59.574 --- 10.0.0.2 ping statistics --- 00:07:59.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.574 rtt min/avg/max/mdev = 0.757/0.757/0.757/0.000 ms 00:07:59.574 13:18:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
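Condensed, the nvmf_tcp_init sequence traced above builds a loopback NVMe/TCP topology out of the two E810 ports it just discovered: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side). A minimal sketch of those steps, with interface names and addresses taken from the trace (the helper in nvmf/common.sh does the same work with more error handling):

    # clear stale addresses, then move the target-side port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 in the root namespace; the target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach port 4420 on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before any NVMe traffic flows
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1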
00:07:59.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:07:59.574 00:07:59.574 --- 10.0.0.1 ping statistics --- 00:07:59.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.574 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:07:59.574 13:18:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.574 13:18:56 -- nvmf/common.sh@410 -- # return 0 00:07:59.574 13:18:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:59.574 13:18:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.574 13:18:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:59.574 13:18:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.574 13:18:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:59.574 13:18:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:59.574 13:18:56 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:59.574 13:18:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:59.574 13:18:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.574 13:18:56 -- common/autotest_common.sh@10 -- # set +x 00:07:59.574 ************************************ 00:07:59.574 START TEST nvmf_filesystem_no_in_capsule 00:07:59.574 ************************************ 00:07:59.574 13:18:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:59.574 13:18:56 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:59.574 13:18:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.574 13:18:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:59.574 13:18:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:59.574 13:18:56 -- common/autotest_common.sh@10 -- # set +x 00:07:59.574 13:18:56 -- nvmf/common.sh@469 -- # nvmfpid=782783 00:07:59.574 13:18:56 -- nvmf/common.sh@470 -- # waitforlisten 782783 00:07:59.574 13:18:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.574 13:18:56 -- common/autotest_common.sh@819 -- # '[' -z 782783 ']' 00:07:59.574 13:18:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.574 13:18:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.574 13:18:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.574 13:18:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.574 13:18:56 -- common/autotest_common.sh@10 -- # set +x 00:07:59.574 [2024-07-26 13:18:57.022346] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:59.574 [2024-07-26 13:18:57.022413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.836 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.836 [2024-07-26 13:18:57.093037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.836 [2024-07-26 13:18:57.132845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.836 [2024-07-26 13:18:57.132991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.836 [2024-07-26 13:18:57.133001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.836 [2024-07-26 13:18:57.133014] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.836 [2024-07-26 13:18:57.133194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.836 [2024-07-26 13:18:57.133342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.836 [2024-07-26 13:18:57.133588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.836 [2024-07-26 13:18:57.133590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.409 13:18:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.409 13:18:57 -- common/autotest_common.sh@852 -- # return 0 00:08:00.409 13:18:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:00.409 13:18:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:00.409 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.409 13:18:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.409 13:18:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.409 13:18:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:00.409 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.409 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.409 [2024-07-26 13:18:57.844565] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.409 13:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.409 13:18:57 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.409 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.409 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.671 Malloc1 00:08:00.671 13:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.671 13:18:57 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.671 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.671 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.671 13:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.671 13:18:57 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.671 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.671 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.671 13:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.671 13:18:57 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
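Taken together, the rpc_cmd calls above provision one TCP subsystem backed by a 512 MiB malloc bdev and expose it on 10.0.0.2:4420; the initiator connect that appears just below then attaches to it from the root namespace. A hedged sketch of the same sequence driven through scripts/rpc.py (the rpc.py form and relative paths are a readability assumption; the test itself uses the rpc_cmd wrapper against /var/tmp/spdk.sock and absolute workspace paths):

    # target runs inside the namespace created earlier
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # TCP transport: -u 8192 sets the I/O unit size, -c 0 disables in-capsule data for this pass
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 512 MiB malloc disk with 512-byte blocks (matches num_blocks 1048576 in the bdev dump below)
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # subsystem allows any host (-a) and reports serial SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, root namespace: connect and wait for the serial to show up in lsblk
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME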
00:08:00.671 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.671 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.671 [2024-07-26 13:18:57.976448] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.671 13:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.671 13:18:57 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:00.671 13:18:57 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:00.671 13:18:57 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:00.671 13:18:57 -- common/autotest_common.sh@1359 -- # local bs 00:08:00.671 13:18:57 -- common/autotest_common.sh@1360 -- # local nb 00:08:00.671 13:18:57 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:00.671 13:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.671 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:00.671 13:18:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.671 13:18:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:00.671 { 00:08:00.671 "name": "Malloc1", 00:08:00.671 "aliases": [ 00:08:00.671 "e9a95149-22bb-43d7-9897-faf8b8768ec8" 00:08:00.671 ], 00:08:00.671 "product_name": "Malloc disk", 00:08:00.671 "block_size": 512, 00:08:00.671 "num_blocks": 1048576, 00:08:00.671 "uuid": "e9a95149-22bb-43d7-9897-faf8b8768ec8", 00:08:00.671 "assigned_rate_limits": { 00:08:00.671 "rw_ios_per_sec": 0, 00:08:00.671 "rw_mbytes_per_sec": 0, 00:08:00.671 "r_mbytes_per_sec": 0, 00:08:00.671 "w_mbytes_per_sec": 0 00:08:00.671 }, 00:08:00.671 "claimed": true, 00:08:00.671 "claim_type": "exclusive_write", 00:08:00.671 "zoned": false, 00:08:00.671 "supported_io_types": { 00:08:00.671 "read": true, 00:08:00.671 "write": true, 00:08:00.671 "unmap": true, 00:08:00.671 "write_zeroes": true, 00:08:00.671 "flush": true, 00:08:00.671 "reset": true, 00:08:00.671 "compare": false, 00:08:00.671 "compare_and_write": false, 00:08:00.671 "abort": true, 00:08:00.671 "nvme_admin": false, 00:08:00.671 "nvme_io": false 00:08:00.671 }, 00:08:00.671 "memory_domains": [ 00:08:00.671 { 00:08:00.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.671 "dma_device_type": 2 00:08:00.671 } 00:08:00.671 ], 00:08:00.671 "driver_specific": {} 00:08:00.671 } 00:08:00.671 ]' 00:08:00.671 13:18:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:00.671 13:18:58 -- common/autotest_common.sh@1362 -- # bs=512 00:08:00.671 13:18:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:00.671 13:18:58 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:00.671 13:18:58 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:00.671 13:18:58 -- common/autotest_common.sh@1367 -- # echo 512 00:08:00.671 13:18:58 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.671 13:18:58 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.589 13:18:59 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.589 13:18:59 -- common/autotest_common.sh@1177 -- # local i=0 00:08:02.589 13:18:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.589 13:18:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:02.589 13:18:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:04.503 13:19:01 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:04.503 13:19:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:04.503 13:19:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.503 13:19:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:04.503 13:19:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.503 13:19:01 -- common/autotest_common.sh@1187 -- # return 0 00:08:04.503 13:19:01 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:04.503 13:19:01 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:04.503 13:19:01 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:04.503 13:19:01 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:04.503 13:19:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:04.503 13:19:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:04.503 13:19:01 -- setup/common.sh@80 -- # echo 536870912 00:08:04.503 13:19:01 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:04.503 13:19:01 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:04.503 13:19:01 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:04.503 13:19:01 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:04.765 13:19:02 -- target/filesystem.sh@69 -- # partprobe 00:08:04.765 13:19:02 -- target/filesystem.sh@70 -- # sleep 1 00:08:06.152 13:19:03 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:06.152 13:19:03 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:06.152 13:19:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:06.152 13:19:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.152 13:19:03 -- common/autotest_common.sh@10 -- # set +x 00:08:06.152 ************************************ 00:08:06.152 START TEST filesystem_ext4 00:08:06.152 ************************************ 00:08:06.152 13:19:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:06.152 13:19:03 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.152 13:19:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.152 13:19:03 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.152 13:19:03 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:06.152 13:19:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:06.152 13:19:03 -- common/autotest_common.sh@904 -- # local i=0 00:08:06.152 13:19:03 -- common/autotest_common.sh@905 -- # local force 00:08:06.152 13:19:03 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:06.152 13:19:03 -- common/autotest_common.sh@908 -- # force=-F 00:08:06.153 13:19:03 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.153 mke2fs 1.46.5 (30-Dec-2021) 00:08:06.153 Discarding device blocks: 0/522240 done 00:08:06.153 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:06.153 Filesystem UUID: d805844c-d2c8-4ac5-93b2-e1479083ad39 00:08:06.153 Superblock backups stored on blocks: 00:08:06.153 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:06.153 00:08:06.153 Allocating group tables: 0/64 done 00:08:06.153 Writing inode tables: 0/64 done 00:08:08.702 Creating journal (8192 blocks): done 00:08:09.707 Writing superblocks and filesystem accounting information: 0/64 done 00:08:09.707 00:08:09.707 13:19:06 -- 
common/autotest_common.sh@921 -- # return 0 00:08:09.707 13:19:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.280 13:19:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.280 13:19:07 -- target/filesystem.sh@25 -- # sync 00:08:10.280 13:19:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.280 13:19:07 -- target/filesystem.sh@27 -- # sync 00:08:10.280 13:19:07 -- target/filesystem.sh@29 -- # i=0 00:08:10.280 13:19:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.280 13:19:07 -- target/filesystem.sh@37 -- # kill -0 782783 00:08:10.280 13:19:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.280 13:19:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.280 13:19:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.280 13:19:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.280 00:08:10.280 real 0m4.446s 00:08:10.280 user 0m0.029s 00:08:10.280 sys 0m0.070s 00:08:10.280 13:19:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.280 13:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:10.280 ************************************ 00:08:10.280 END TEST filesystem_ext4 00:08:10.280 ************************************ 00:08:10.280 13:19:07 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:10.280 13:19:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:10.280 13:19:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.280 13:19:07 -- common/autotest_common.sh@10 -- # set +x 00:08:10.280 ************************************ 00:08:10.280 START TEST filesystem_btrfs 00:08:10.280 ************************************ 00:08:10.280 13:19:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:10.280 13:19:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:10.280 13:19:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.280 13:19:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:10.280 13:19:07 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:10.280 13:19:07 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:10.280 13:19:07 -- common/autotest_common.sh@904 -- # local i=0 00:08:10.280 13:19:07 -- common/autotest_common.sh@905 -- # local force 00:08:10.280 13:19:07 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:10.280 13:19:07 -- common/autotest_common.sh@910 -- # force=-f 00:08:10.280 13:19:07 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:10.542 btrfs-progs v6.6.2 00:08:10.542 See https://btrfs.readthedocs.io for more information. 00:08:10.542 00:08:10.542 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:10.542 NOTE: several default settings have changed in version 5.15, please make sure 00:08:10.542 this does not affect your deployments: 00:08:10.542 - DUP for metadata (-m dup) 00:08:10.542 - enabled no-holes (-O no-holes) 00:08:10.542 - enabled free-space-tree (-R free-space-tree) 00:08:10.542 00:08:10.542 Label: (null) 00:08:10.542 UUID: ca6cf2f3-412d-495b-b03f-471013fc11b9 00:08:10.542 Node size: 16384 00:08:10.542 Sector size: 4096 00:08:10.542 Filesystem size: 510.00MiB 00:08:10.542 Block group profiles: 00:08:10.542 Data: single 8.00MiB 00:08:10.542 Metadata: DUP 32.00MiB 00:08:10.542 System: DUP 8.00MiB 00:08:10.542 SSD detected: yes 00:08:10.542 Zoned device: no 00:08:10.542 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:10.542 Runtime features: free-space-tree 00:08:10.542 Checksum: crc32c 00:08:10.542 Number of devices: 1 00:08:10.542 Devices: 00:08:10.542 ID SIZE PATH 00:08:10.542 1 510.00MiB /dev/nvme0n1p1 00:08:10.542 00:08:10.542 13:19:07 -- common/autotest_common.sh@921 -- # return 0 00:08:10.542 13:19:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.114 13:19:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.114 13:19:08 -- target/filesystem.sh@25 -- # sync 00:08:11.114 13:19:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.114 13:19:08 -- target/filesystem.sh@27 -- # sync 00:08:11.114 13:19:08 -- target/filesystem.sh@29 -- # i=0 00:08:11.114 13:19:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.114 13:19:08 -- target/filesystem.sh@37 -- # kill -0 782783 00:08:11.114 13:19:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.114 13:19:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.114 13:19:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.114 13:19:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.114 00:08:11.114 real 0m0.683s 00:08:11.114 user 0m0.031s 00:08:11.114 sys 0m0.126s 00:08:11.114 13:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.114 13:19:08 -- common/autotest_common.sh@10 -- # set +x 00:08:11.114 ************************************ 00:08:11.114 END TEST filesystem_btrfs 00:08:11.114 ************************************ 00:08:11.114 13:19:08 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:11.114 13:19:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:11.114 13:19:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.114 13:19:08 -- common/autotest_common.sh@10 -- # set +x 00:08:11.114 ************************************ 00:08:11.114 START TEST filesystem_xfs 00:08:11.114 ************************************ 00:08:11.114 13:19:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:11.114 13:19:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:11.114 13:19:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.114 13:19:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:11.114 13:19:08 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:11.114 13:19:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:11.114 13:19:08 -- common/autotest_common.sh@904 -- # local i=0 00:08:11.114 13:19:08 -- common/autotest_common.sh@905 -- # local force 00:08:11.114 13:19:08 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:11.114 13:19:08 -- common/autotest_common.sh@910 -- # force=-f 00:08:11.114 13:19:08 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:11.114 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:11.114 = sectsz=512 attr=2, projid32bit=1 00:08:11.114 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:11.114 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:11.114 data = bsize=4096 blocks=130560, imaxpct=25 00:08:11.114 = sunit=0 swidth=0 blks 00:08:11.114 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:11.114 log =internal log bsize=4096 blocks=16384, version=2 00:08:11.114 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:11.114 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:12.501 Discarding blocks...Done. 00:08:12.501 13:19:09 -- common/autotest_common.sh@921 -- # return 0 00:08:12.501 13:19:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.050 13:19:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.050 13:19:12 -- target/filesystem.sh@25 -- # sync 00:08:15.050 13:19:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.050 13:19:12 -- target/filesystem.sh@27 -- # sync 00:08:15.050 13:19:12 -- target/filesystem.sh@29 -- # i=0 00:08:15.050 13:19:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.050 13:19:12 -- target/filesystem.sh@37 -- # kill -0 782783 00:08:15.050 13:19:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.050 13:19:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.050 13:19:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.050 13:19:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.050 00:08:15.050 real 0m3.894s 00:08:15.050 user 0m0.032s 00:08:15.050 sys 0m0.070s 00:08:15.050 13:19:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.050 13:19:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.050 ************************************ 00:08:15.050 END TEST filesystem_xfs 00:08:15.050 ************************************ 00:08:15.050 13:19:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:15.310 13:19:12 -- target/filesystem.sh@93 -- # sync 00:08:15.310 13:19:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.310 13:19:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.310 13:19:12 -- common/autotest_common.sh@1198 -- # local i=0 00:08:15.310 13:19:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:15.310 13:19:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.310 13:19:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:15.310 13:19:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.310 13:19:12 -- common/autotest_common.sh@1210 -- # return 0 00:08:15.310 13:19:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.310 13:19:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.310 13:19:12 -- common/autotest_common.sh@10 -- # set +x 00:08:15.571 13:19:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.571 13:19:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:15.571 13:19:12 -- target/filesystem.sh@101 -- # killprocess 782783 00:08:15.571 13:19:12 -- common/autotest_common.sh@926 -- # '[' -z 782783 ']' 00:08:15.571 13:19:12 -- common/autotest_common.sh@930 -- # kill -0 782783 00:08:15.571 13:19:12 -- 
common/autotest_common.sh@931 -- # uname 00:08:15.571 13:19:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.571 13:19:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 782783 00:08:15.571 13:19:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:15.571 13:19:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:15.571 13:19:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 782783' 00:08:15.571 killing process with pid 782783 00:08:15.571 13:19:12 -- common/autotest_common.sh@945 -- # kill 782783 00:08:15.571 13:19:12 -- common/autotest_common.sh@950 -- # wait 782783 00:08:15.832 13:19:13 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:15.832 00:08:15.832 real 0m16.098s 00:08:15.832 user 1m3.632s 00:08:15.832 sys 0m1.207s 00:08:15.832 13:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.832 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.832 ************************************ 00:08:15.832 END TEST nvmf_filesystem_no_in_capsule 00:08:15.832 ************************************ 00:08:15.832 13:19:13 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:15.832 13:19:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:15.832 13:19:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.832 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.832 ************************************ 00:08:15.832 START TEST nvmf_filesystem_in_capsule 00:08:15.832 ************************************ 00:08:15.832 13:19:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:15.832 13:19:13 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:15.832 13:19:13 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:15.832 13:19:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.832 13:19:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:15.832 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.832 13:19:13 -- nvmf/common.sh@469 -- # nvmfpid=786217 00:08:15.832 13:19:13 -- nvmf/common.sh@470 -- # waitforlisten 786217 00:08:15.832 13:19:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.832 13:19:13 -- common/autotest_common.sh@819 -- # '[' -z 786217 ']' 00:08:15.832 13:19:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.832 13:19:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:15.832 13:19:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.832 13:19:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:15.832 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:15.832 [2024-07-26 13:19:13.163623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:15.832 [2024-07-26 13:19:13.163671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.832 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.832 [2024-07-26 13:19:13.228197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.832 [2024-07-26 13:19:13.256068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.832 [2024-07-26 13:19:13.256210] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.832 [2024-07-26 13:19:13.256220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.832 [2024-07-26 13:19:13.256228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.832 [2024-07-26 13:19:13.256328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.832 [2024-07-26 13:19:13.256442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.832 [2024-07-26 13:19:13.256588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.832 [2024-07-26 13:19:13.256589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.798 13:19:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:16.799 13:19:13 -- common/autotest_common.sh@852 -- # return 0 00:08:16.799 13:19:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.799 13:19:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.799 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 13:19:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.799 13:19:13 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:16.799 13:19:13 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:16.799 13:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 [2024-07-26 13:19:13.975572] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.799 13:19:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:16.799 13:19:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 Malloc1 00:08:16.799 13:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:14 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.799 13:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 13:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:14 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:16.799 13:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 13:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:14 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
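The only functional difference from the no_in_capsule pass above is the transport setting: this run creates the TCP transport with -c 4096, so up to 4096 bytes of write data can ride inside the command capsule instead of being fetched by the controller in a separate data transfer. Side by side, as taken from the two traces:

    # nvmf_filesystem_no_in_capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # nvmf_filesystem_in_capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else (malloc bdev, subsystem, listener, connect, and the ext4/btrfs/xfs round trips) repeats unchanged.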
00:08:16.799 13:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 [2024-07-26 13:19:14.099239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.799 13:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:14 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:16.799 13:19:14 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:16.799 13:19:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:16.799 13:19:14 -- common/autotest_common.sh@1359 -- # local bs 00:08:16.799 13:19:14 -- common/autotest_common.sh@1360 -- # local nb 00:08:16.799 13:19:14 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:16.799 13:19:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.799 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:08:16.799 13:19:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.799 13:19:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:16.799 { 00:08:16.799 "name": "Malloc1", 00:08:16.799 "aliases": [ 00:08:16.799 "47c85ef8-b3ac-4c10-b7da-afc631568331" 00:08:16.799 ], 00:08:16.799 "product_name": "Malloc disk", 00:08:16.799 "block_size": 512, 00:08:16.799 "num_blocks": 1048576, 00:08:16.799 "uuid": "47c85ef8-b3ac-4c10-b7da-afc631568331", 00:08:16.799 "assigned_rate_limits": { 00:08:16.799 "rw_ios_per_sec": 0, 00:08:16.799 "rw_mbytes_per_sec": 0, 00:08:16.799 "r_mbytes_per_sec": 0, 00:08:16.799 "w_mbytes_per_sec": 0 00:08:16.799 }, 00:08:16.799 "claimed": true, 00:08:16.799 "claim_type": "exclusive_write", 00:08:16.799 "zoned": false, 00:08:16.799 "supported_io_types": { 00:08:16.799 "read": true, 00:08:16.799 "write": true, 00:08:16.799 "unmap": true, 00:08:16.799 "write_zeroes": true, 00:08:16.799 "flush": true, 00:08:16.799 "reset": true, 00:08:16.799 "compare": false, 00:08:16.799 "compare_and_write": false, 00:08:16.799 "abort": true, 00:08:16.799 "nvme_admin": false, 00:08:16.799 "nvme_io": false 00:08:16.799 }, 00:08:16.799 "memory_domains": [ 00:08:16.799 { 00:08:16.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.799 "dma_device_type": 2 00:08:16.799 } 00:08:16.799 ], 00:08:16.799 "driver_specific": {} 00:08:16.799 } 00:08:16.799 ]' 00:08:16.799 13:19:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:16.799 13:19:14 -- common/autotest_common.sh@1362 -- # bs=512 00:08:16.799 13:19:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:16.799 13:19:14 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:16.799 13:19:14 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:16.799 13:19:14 -- common/autotest_common.sh@1367 -- # echo 512 00:08:16.799 13:19:14 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:16.799 13:19:14 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.755 13:19:15 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.755 13:19:15 -- common/autotest_common.sh@1177 -- # local i=0 00:08:18.755 13:19:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.755 13:19:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:18.755 13:19:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:20.672 13:19:17 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:20.672 13:19:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:20.672 13:19:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.672 13:19:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:20.672 13:19:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.672 13:19:17 -- common/autotest_common.sh@1187 -- # return 0 00:08:20.672 13:19:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:20.672 13:19:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:20.672 13:19:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:20.672 13:19:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:20.672 13:19:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:20.672 13:19:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:20.672 13:19:17 -- setup/common.sh@80 -- # echo 536870912 00:08:20.672 13:19:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:20.672 13:19:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:20.672 13:19:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:20.672 13:19:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:20.672 13:19:17 -- target/filesystem.sh@69 -- # partprobe 00:08:20.933 13:19:18 -- target/filesystem.sh@70 -- # sleep 1 00:08:21.877 13:19:19 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:21.877 13:19:19 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:21.877 13:19:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:21.877 13:19:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.877 13:19:19 -- common/autotest_common.sh@10 -- # set +x 00:08:21.877 ************************************ 00:08:21.877 START TEST filesystem_in_capsule_ext4 00:08:21.877 ************************************ 00:08:21.877 13:19:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:21.877 13:19:19 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:21.877 13:19:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.877 13:19:19 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:21.877 13:19:19 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:21.877 13:19:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:21.877 13:19:19 -- common/autotest_common.sh@904 -- # local i=0 00:08:21.877 13:19:19 -- common/autotest_common.sh@905 -- # local force 00:08:21.877 13:19:19 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:21.877 13:19:19 -- common/autotest_common.sh@908 -- # force=-F 00:08:21.877 13:19:19 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:21.877 mke2fs 1.46.5 (30-Dec-2021) 00:08:22.138 Discarding device blocks: 0/522240 done 00:08:22.138 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:22.138 Filesystem UUID: 1071f2dc-8cbe-4f0f-bf65-0a4196714d9f 00:08:22.138 Superblock backups stored on blocks: 00:08:22.138 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:22.138 00:08:22.138 Allocating group tables: 0/64 done 00:08:22.138 Writing inode tables: 0/64 done 00:08:22.138 Creating journal (8192 blocks): done 00:08:23.233 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:08:23.233 00:08:23.234 
13:19:20 -- common/autotest_common.sh@921 -- # return 0 00:08:23.234 13:19:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.806 13:19:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.806 13:19:21 -- target/filesystem.sh@25 -- # sync 00:08:23.806 13:19:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.806 13:19:21 -- target/filesystem.sh@27 -- # sync 00:08:23.806 13:19:21 -- target/filesystem.sh@29 -- # i=0 00:08:23.806 13:19:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.806 13:19:21 -- target/filesystem.sh@37 -- # kill -0 786217 00:08:23.806 13:19:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.806 13:19:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.806 13:19:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.806 13:19:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.806 00:08:23.806 real 0m1.897s 00:08:23.806 user 0m0.036s 00:08:23.806 sys 0m0.060s 00:08:23.806 13:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.806 13:19:21 -- common/autotest_common.sh@10 -- # set +x 00:08:23.806 ************************************ 00:08:23.806 END TEST filesystem_in_capsule_ext4 00:08:23.806 ************************************ 00:08:23.806 13:19:21 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:23.806 13:19:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:23.806 13:19:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.806 13:19:21 -- common/autotest_common.sh@10 -- # set +x 00:08:23.806 ************************************ 00:08:23.806 START TEST filesystem_in_capsule_btrfs 00:08:23.806 ************************************ 00:08:23.806 13:19:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:23.806 13:19:21 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:23.806 13:19:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.806 13:19:21 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:23.806 13:19:21 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:23.806 13:19:21 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:23.806 13:19:21 -- common/autotest_common.sh@904 -- # local i=0 00:08:23.806 13:19:21 -- common/autotest_common.sh@905 -- # local force 00:08:23.806 13:19:21 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:23.806 13:19:21 -- common/autotest_common.sh@910 -- # force=-f 00:08:23.806 13:19:21 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:24.378 btrfs-progs v6.6.2 00:08:24.378 See https://btrfs.readthedocs.io for more information. 00:08:24.378 00:08:24.379 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:24.379 NOTE: several default settings have changed in version 5.15, please make sure 00:08:24.379 this does not affect your deployments: 00:08:24.379 - DUP for metadata (-m dup) 00:08:24.379 - enabled no-holes (-O no-holes) 00:08:24.379 - enabled free-space-tree (-R free-space-tree) 00:08:24.379 00:08:24.379 Label: (null) 00:08:24.379 UUID: 51bb924e-8d1b-47c3-985b-d9040fac09d1 00:08:24.379 Node size: 16384 00:08:24.379 Sector size: 4096 00:08:24.379 Filesystem size: 510.00MiB 00:08:24.379 Block group profiles: 00:08:24.379 Data: single 8.00MiB 00:08:24.379 Metadata: DUP 32.00MiB 00:08:24.379 System: DUP 8.00MiB 00:08:24.379 SSD detected: yes 00:08:24.379 Zoned device: no 00:08:24.379 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:24.379 Runtime features: free-space-tree 00:08:24.379 Checksum: crc32c 00:08:24.379 Number of devices: 1 00:08:24.379 Devices: 00:08:24.379 ID SIZE PATH 00:08:24.379 1 510.00MiB /dev/nvme0n1p1 00:08:24.379 00:08:24.379 13:19:21 -- common/autotest_common.sh@921 -- # return 0 00:08:24.379 13:19:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.322 13:19:22 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.322 13:19:22 -- target/filesystem.sh@25 -- # sync 00:08:25.322 13:19:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.322 13:19:22 -- target/filesystem.sh@27 -- # sync 00:08:25.322 13:19:22 -- target/filesystem.sh@29 -- # i=0 00:08:25.322 13:19:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.322 13:19:22 -- target/filesystem.sh@37 -- # kill -0 786217 00:08:25.322 13:19:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.322 13:19:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.322 13:19:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.322 13:19:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.322 00:08:25.322 real 0m1.400s 00:08:25.322 user 0m0.032s 00:08:25.322 sys 0m0.132s 00:08:25.322 13:19:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.322 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.322 ************************************ 00:08:25.322 END TEST filesystem_in_capsule_btrfs 00:08:25.322 ************************************ 00:08:25.322 13:19:22 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.322 13:19:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.322 13:19:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.322 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.322 ************************************ 00:08:25.322 START TEST filesystem_in_capsule_xfs 00:08:25.322 ************************************ 00:08:25.322 13:19:22 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:25.322 13:19:22 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:25.322 13:19:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.322 13:19:22 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:25.322 13:19:22 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:25.322 13:19:22 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.322 13:19:22 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.322 13:19:22 -- common/autotest_common.sh@905 -- # local force 00:08:25.322 13:19:22 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:25.322 13:19:22 -- common/autotest_common.sh@910 -- # force=-f 
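Each filesystem_* subtest above, and the in-capsule xfs run that follows, exercises the same round trip from target/filesystem.sh: build the filesystem on the NVMe/TCP-backed partition, mount it, create and delete a file with syncs in between, unmount, then confirm the target process and both block devices are still present. A condensed sketch of that pattern (the loop wrapper is illustrative; the test drives each fstype through run_test):

    for fstype in ext4 btrfs xfs; do
        case "$fstype" in
            ext4) mkfs.ext4 -F /dev/nvme0n1p1 ;;          # -F forces reuse of the partition
            *)    "mkfs.$fstype" -f /dev/nvme0n1p1 ;;     # btrfs/xfs use -f for the same purpose
        esac
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa                             # push a write through the NVMe/TCP path
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"                                # target app must still be alive
        lsblk -l -o NAME | grep -q -w nvme0n1             # namespace still visible
        lsblk -l -o NAME | grep -q -w nvme0n1p1           # partition still visible
    done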
00:08:25.322 13:19:22 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:25.322 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:25.322 = sectsz=512 attr=2, projid32bit=1 00:08:25.322 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:25.322 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:25.322 data = bsize=4096 blocks=130560, imaxpct=25 00:08:25.322 = sunit=0 swidth=0 blks 00:08:25.322 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:25.322 log =internal log bsize=4096 blocks=16384, version=2 00:08:25.322 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:25.322 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:26.265 Discarding blocks...Done. 00:08:26.265 13:19:23 -- common/autotest_common.sh@921 -- # return 0 00:08:26.265 13:19:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.814 13:19:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.814 13:19:25 -- target/filesystem.sh@25 -- # sync 00:08:28.814 13:19:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.814 13:19:25 -- target/filesystem.sh@27 -- # sync 00:08:28.814 13:19:25 -- target/filesystem.sh@29 -- # i=0 00:08:28.814 13:19:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.814 13:19:25 -- target/filesystem.sh@37 -- # kill -0 786217 00:08:28.814 13:19:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.814 13:19:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.814 13:19:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.814 13:19:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.814 00:08:28.814 real 0m3.186s 00:08:28.814 user 0m0.024s 00:08:28.814 sys 0m0.081s 00:08:28.814 13:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.814 13:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:28.814 ************************************ 00:08:28.814 END TEST filesystem_in_capsule_xfs 00:08:28.814 ************************************ 00:08:28.814 13:19:25 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.814 13:19:26 -- target/filesystem.sh@93 -- # sync 00:08:28.814 13:19:26 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.076 13:19:26 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.076 13:19:26 -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.076 13:19:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:29.076 13:19:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.076 13:19:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:29.076 13:19:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.076 13:19:26 -- common/autotest_common.sh@1210 -- # return 0 00:08:29.076 13:19:26 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.076 13:19:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.076 13:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:29.076 13:19:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.076 13:19:26 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:29.076 13:19:26 -- target/filesystem.sh@101 -- # killprocess 786217 00:08:29.076 13:19:26 -- common/autotest_common.sh@926 -- # '[' -z 786217 ']' 00:08:29.076 13:19:26 -- common/autotest_common.sh@930 -- # kill -0 786217 
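Teardown, visible just above and continuing below, is symmetric for both passes: remove the test partition under a lock, disconnect the initiator, delete the subsystem over RPC, then stop the in-namespace nvmf_tgt. A reduced sketch (killprocess is shortened to kill/wait here; the real helper first checks with kill -0 and ps whether the PID is still a live reactor process, as the trace shows):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # initiator detaches first
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                # stop nvmf_tgt inside the namespace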
00:08:29.076 13:19:26 -- common/autotest_common.sh@931 -- # uname 00:08:29.076 13:19:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:29.076 13:19:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 786217 00:08:29.076 13:19:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:29.076 13:19:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:29.076 13:19:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 786217' 00:08:29.076 killing process with pid 786217 00:08:29.076 13:19:26 -- common/autotest_common.sh@945 -- # kill 786217 00:08:29.076 13:19:26 -- common/autotest_common.sh@950 -- # wait 786217 00:08:29.337 13:19:26 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.337 00:08:29.337 real 0m13.510s 00:08:29.337 user 0m53.368s 00:08:29.337 sys 0m1.196s 00:08:29.337 13:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.337 13:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:29.337 ************************************ 00:08:29.337 END TEST nvmf_filesystem_in_capsule 00:08:29.337 ************************************ 00:08:29.337 13:19:26 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:29.337 13:19:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.337 13:19:26 -- nvmf/common.sh@116 -- # sync 00:08:29.337 13:19:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:29.337 13:19:26 -- nvmf/common.sh@119 -- # set +e 00:08:29.337 13:19:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.337 13:19:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:29.337 rmmod nvme_tcp 00:08:29.337 rmmod nvme_fabrics 00:08:29.337 rmmod nvme_keyring 00:08:29.337 13:19:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.337 13:19:26 -- nvmf/common.sh@123 -- # set -e 00:08:29.337 13:19:26 -- nvmf/common.sh@124 -- # return 0 00:08:29.337 13:19:26 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:29.337 13:19:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.337 13:19:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:29.337 13:19:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:29.337 13:19:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.337 13:19:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:29.338 13:19:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.338 13:19:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.338 13:19:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.887 13:19:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:31.887 00:08:31.887 real 0m38.865s 00:08:31.887 user 1m58.854s 00:08:31.887 sys 0m7.709s 00:08:31.887 13:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.887 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:08:31.887 ************************************ 00:08:31.887 END TEST nvmf_filesystem 00:08:31.887 ************************************ 00:08:31.887 13:19:28 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.887 13:19:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:31.887 13:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.887 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:08:31.887 ************************************ 00:08:31.887 START TEST nvmf_discovery 00:08:31.887 ************************************ 00:08:31.887 13:19:28 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:31.887 * Looking for test storage... 00:08:31.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.888 13:19:28 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.888 13:19:28 -- nvmf/common.sh@7 -- # uname -s 00:08:31.888 13:19:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.888 13:19:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.888 13:19:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.888 13:19:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.888 13:19:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.888 13:19:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.888 13:19:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.888 13:19:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.888 13:19:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.888 13:19:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.888 13:19:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.888 13:19:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.888 13:19:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.888 13:19:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.888 13:19:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.888 13:19:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.888 13:19:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.888 13:19:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.888 13:19:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.888 13:19:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.888 13:19:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.888 13:19:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.888 13:19:28 -- paths/export.sh@5 -- # export PATH 00:08:31.888 13:19:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.888 13:19:28 -- nvmf/common.sh@46 -- # : 0 00:08:31.888 13:19:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:31.888 13:19:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:31.888 13:19:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:31.888 13:19:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.888 13:19:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.888 13:19:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:31.888 13:19:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:31.888 13:19:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:31.888 13:19:28 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:31.888 13:19:28 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:31.888 13:19:28 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:31.888 13:19:28 -- target/discovery.sh@15 -- # hash nvme 00:08:31.888 13:19:28 -- target/discovery.sh@20 -- # nvmftestinit 00:08:31.888 13:19:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:31.888 13:19:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.888 13:19:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:31.888 13:19:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:31.888 13:19:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:31.888 13:19:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.888 13:19:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.888 13:19:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.888 13:19:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:31.888 13:19:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:31.888 13:19:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:31.888 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:08:38.486 13:19:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.486 13:19:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.486 13:19:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.486 13:19:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:38.486 13:19:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.486 13:19:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.486 13:19:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.486 13:19:35 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:38.486 13:19:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:38.486 13:19:35 -- nvmf/common.sh@295 -- # e810=() 00:08:38.486 13:19:35 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.486 13:19:35 -- nvmf/common.sh@296 -- # x722=() 00:08:38.486 13:19:35 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.486 13:19:35 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.486 13:19:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.486 13:19:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.486 13:19:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.486 13:19:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:38.486 13:19:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.486 13:19:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.486 13:19:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:38.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:38.486 13:19:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.486 13:19:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:38.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:38.486 13:19:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.486 13:19:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:38.486 13:19:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.486 13:19:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.486 13:19:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.486 13:19:35 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.486 13:19:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:38.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:38.486 13:19:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.486 13:19:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.486 13:19:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.486 13:19:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.486 13:19:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.486 13:19:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:38.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:38.486 13:19:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.486 13:19:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.486 13:19:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.486 13:19:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.487 13:19:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:38.487 13:19:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:38.487 13:19:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.487 13:19:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.487 13:19:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.487 13:19:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:38.487 13:19:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.487 13:19:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.487 13:19:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:38.487 13:19:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.487 13:19:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.487 13:19:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:38.487 13:19:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:38.487 13:19:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.487 13:19:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.487 13:19:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.487 13:19:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.487 13:19:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:38.487 13:19:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.487 13:19:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.798 13:19:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.798 13:19:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:38.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:08:38.798 00:08:38.798 --- 10.0.0.2 ping statistics --- 00:08:38.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.798 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:08:38.798 13:19:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:08:38.798 00:08:38.798 --- 10.0.0.1 ping statistics --- 00:08:38.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.798 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:08:38.798 13:19:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.798 13:19:36 -- nvmf/common.sh@410 -- # return 0 00:08:38.798 13:19:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.798 13:19:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.798 13:19:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.798 13:19:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.798 13:19:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.798 13:19:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.798 13:19:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.798 13:19:36 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:38.798 13:19:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.798 13:19:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:38.798 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.798 13:19:36 -- nvmf/common.sh@469 -- # nvmfpid=793218 00:08:38.798 13:19:36 -- nvmf/common.sh@470 -- # waitforlisten 793218 00:08:38.798 13:19:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.798 13:19:36 -- common/autotest_common.sh@819 -- # '[' -z 793218 ']' 00:08:38.798 13:19:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.798 13:19:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:38.798 13:19:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.798 13:19:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:38.798 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.798 [2024-07-26 13:19:36.118564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:38.798 [2024-07-26 13:19:36.118636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.798 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.798 [2024-07-26 13:19:36.189217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.798 [2024-07-26 13:19:36.228023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.798 [2024-07-26 13:19:36.228167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.798 [2024-07-26 13:19:36.228177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.798 [2024-07-26 13:19:36.228186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
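The nvmfappstart and rpc_cmd helpers being traced here reduce to a short sequence of plain commands. A minimal sketch, assuming the same CI workspace path and the default /var/tmp/spdk.sock RPC socket that the trace reports; the socket-wait loop is an illustrative stand-in for the test's waitforlisten helper, not the helper itself:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  # Launch the target inside the test namespace: shm id 0, all trace groups, cores 0-3,
  # exactly the flags shown in the trace.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

  # Block until the app is listening on its default RPC unix socket before issuing RPCs
  # (unix sockets are unaffected by the network namespace, so rpc.py runs from the root ns).
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # First RPC of discovery.sh: create the TCP transport with the options from the trace.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192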
00:08:38.798 [2024-07-26 13:19:36.228272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.798 [2024-07-26 13:19:36.228436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.798 [2024-07-26 13:19:36.228473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.798 [2024-07-26 13:19:36.228475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.766 13:19:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:39.766 13:19:36 -- common/autotest_common.sh@852 -- # return 0 00:08:39.766 13:19:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.766 13:19:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.766 13:19:36 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 [2024-07-26 13:19:36.936527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.766 13:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:36 -- target/discovery.sh@26 -- # seq 1 4 00:08:39.766 13:19:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.766 13:19:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 Null1 00:08:39.766 13:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 [2024-07-26 13:19:36.992832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.766 13:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.766 13:19:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:39.766 13:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 Null2 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:39.766 13:19:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.766 13:19:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 Null3 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.766 13:19:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.766 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.766 Null4 00:08:39.766 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.766 13:19:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:39.766 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.767 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.767 13:19:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:39.767 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.767 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.767 13:19:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:39.767 
13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.767 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.767 13:19:37 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.767 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.767 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.767 13:19:37 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:39.767 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.767 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.767 13:19:37 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:40.028 00:08:40.028 Discovery Log Number of Records 6, Generation counter 6 00:08:40.028 =====Discovery Log Entry 0====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: current discovery subsystem 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4420 00:08:40.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:40.028 traddr: 10.0.0.2 00:08:40.028 eflags: explicit discovery connections, duplicate discovery information 00:08:40.028 sectype: none 00:08:40.028 =====Discovery Log Entry 1====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: nvme subsystem 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4420 00:08:40.028 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:40.028 traddr: 10.0.0.2 00:08:40.028 eflags: none 00:08:40.028 sectype: none 00:08:40.028 =====Discovery Log Entry 2====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: nvme subsystem 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4420 00:08:40.028 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:40.028 traddr: 10.0.0.2 00:08:40.028 eflags: none 00:08:40.028 sectype: none 00:08:40.028 =====Discovery Log Entry 3====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: nvme subsystem 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4420 00:08:40.028 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:40.028 traddr: 10.0.0.2 00:08:40.028 eflags: none 00:08:40.028 sectype: none 00:08:40.028 =====Discovery Log Entry 4====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: nvme subsystem 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4420 00:08:40.028 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:40.028 traddr: 10.0.0.2 00:08:40.028 eflags: none 00:08:40.028 sectype: none 00:08:40.028 =====Discovery Log Entry 5====== 00:08:40.028 trtype: tcp 00:08:40.028 adrfam: ipv4 00:08:40.028 subtype: discovery subsystem referral 00:08:40.028 treq: not required 00:08:40.028 portid: 0 00:08:40.028 trsvcid: 4430 00:08:40.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:40.029 traddr: 10.0.0.2 00:08:40.029 eflags: none 00:08:40.029 sectype: none 00:08:40.029 13:19:37 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:40.029 Perform nvmf subsystem discovery via RPC 00:08:40.029 13:19:37 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 [2024-07-26 13:19:37.373980] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:40.029 [ 00:08:40.029 { 00:08:40.029 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:40.029 "subtype": "Discovery", 00:08:40.029 "listen_addresses": [ 00:08:40.029 { 00:08:40.029 "transport": "TCP", 00:08:40.029 "trtype": "TCP", 00:08:40.029 "adrfam": "IPv4", 00:08:40.029 "traddr": "10.0.0.2", 00:08:40.029 "trsvcid": "4420" 00:08:40.029 } 00:08:40.029 ], 00:08:40.029 "allow_any_host": true, 00:08:40.029 "hosts": [] 00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.029 "subtype": "NVMe", 00:08:40.029 "listen_addresses": [ 00:08:40.029 { 00:08:40.029 "transport": "TCP", 00:08:40.029 "trtype": "TCP", 00:08:40.029 "adrfam": "IPv4", 00:08:40.029 "traddr": "10.0.0.2", 00:08:40.029 "trsvcid": "4420" 00:08:40.029 } 00:08:40.029 ], 00:08:40.029 "allow_any_host": true, 00:08:40.029 "hosts": [], 00:08:40.029 "serial_number": "SPDK00000000000001", 00:08:40.029 "model_number": "SPDK bdev Controller", 00:08:40.029 "max_namespaces": 32, 00:08:40.029 "min_cntlid": 1, 00:08:40.029 "max_cntlid": 65519, 00:08:40.029 "namespaces": [ 00:08:40.029 { 00:08:40.029 "nsid": 1, 00:08:40.029 "bdev_name": "Null1", 00:08:40.029 "name": "Null1", 00:08:40.029 "nguid": "3880D78470544D0AAC81DC9D888E335C", 00:08:40.029 "uuid": "3880d784-7054-4d0a-ac81-dc9d888e335c" 00:08:40.029 } 00:08:40.029 ] 00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:40.029 "subtype": "NVMe", 00:08:40.029 "listen_addresses": [ 00:08:40.029 { 00:08:40.029 "transport": "TCP", 00:08:40.029 "trtype": "TCP", 00:08:40.029 "adrfam": "IPv4", 00:08:40.029 "traddr": "10.0.0.2", 00:08:40.029 "trsvcid": "4420" 00:08:40.029 } 00:08:40.029 ], 00:08:40.029 "allow_any_host": true, 00:08:40.029 "hosts": [], 00:08:40.029 "serial_number": "SPDK00000000000002", 00:08:40.029 "model_number": "SPDK bdev Controller", 00:08:40.029 "max_namespaces": 32, 00:08:40.029 "min_cntlid": 1, 00:08:40.029 "max_cntlid": 65519, 00:08:40.029 "namespaces": [ 00:08:40.029 { 00:08:40.029 "nsid": 1, 00:08:40.029 "bdev_name": "Null2", 00:08:40.029 "name": "Null2", 00:08:40.029 "nguid": "E93C001103804B7EA60B2D306031338D", 00:08:40.029 "uuid": "e93c0011-0380-4b7e-a60b-2d306031338d" 00:08:40.029 } 00:08:40.029 ] 00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:40.029 "subtype": "NVMe", 00:08:40.029 "listen_addresses": [ 00:08:40.029 { 00:08:40.029 "transport": "TCP", 00:08:40.029 "trtype": "TCP", 00:08:40.029 "adrfam": "IPv4", 00:08:40.029 "traddr": "10.0.0.2", 00:08:40.029 "trsvcid": "4420" 00:08:40.029 } 00:08:40.029 ], 00:08:40.029 "allow_any_host": true, 00:08:40.029 "hosts": [], 00:08:40.029 "serial_number": "SPDK00000000000003", 00:08:40.029 "model_number": "SPDK bdev Controller", 00:08:40.029 "max_namespaces": 32, 00:08:40.029 "min_cntlid": 1, 00:08:40.029 "max_cntlid": 65519, 00:08:40.029 "namespaces": [ 00:08:40.029 { 00:08:40.029 "nsid": 1, 00:08:40.029 "bdev_name": "Null3", 00:08:40.029 "name": "Null3", 00:08:40.029 "nguid": "2B008E621C0B4C69BA8E2434BB91DA19", 00:08:40.029 "uuid": "2b008e62-1c0b-4c69-ba8e-2434bb91da19" 00:08:40.029 } 00:08:40.029 ] 
00:08:40.029 }, 00:08:40.029 { 00:08:40.029 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:40.029 "subtype": "NVMe", 00:08:40.029 "listen_addresses": [ 00:08:40.029 { 00:08:40.029 "transport": "TCP", 00:08:40.029 "trtype": "TCP", 00:08:40.029 "adrfam": "IPv4", 00:08:40.029 "traddr": "10.0.0.2", 00:08:40.029 "trsvcid": "4420" 00:08:40.029 } 00:08:40.029 ], 00:08:40.029 "allow_any_host": true, 00:08:40.029 "hosts": [], 00:08:40.029 "serial_number": "SPDK00000000000004", 00:08:40.029 "model_number": "SPDK bdev Controller", 00:08:40.029 "max_namespaces": 32, 00:08:40.029 "min_cntlid": 1, 00:08:40.029 "max_cntlid": 65519, 00:08:40.029 "namespaces": [ 00:08:40.029 { 00:08:40.029 "nsid": 1, 00:08:40.029 "bdev_name": "Null4", 00:08:40.029 "name": "Null4", 00:08:40.029 "nguid": "3BD717AC6FC9473AB4943A261620FF61", 00:08:40.029 "uuid": "3bd717ac-6fc9-473a-b494-3a261620ff61" 00:08:40.029 } 00:08:40.029 ] 00:08:40.029 } 00:08:40.029 ] 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@42 -- # seq 1 4 00:08:40.029 13:19:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.029 13:19:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.029 13:19:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.029 13:19:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.029 13:19:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
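For reference, the four cnodeN subsystems in the JSON above were provisioned earlier in this trace (target/discovery.sh lines 26-35), and the teardown interleaved around this point undoes them. A condensed rpc.py equivalent of both halves; the loop structure and the $RPC variable are illustrative, while the RPC names, arguments, and addresses are the ones in the trace:

  RPC="$SPDK/scripts/rpc.py"   # same checkout path as above
  for i in 1 2 3 4; do
    # Null backing bdev per subsystem, sized exactly as in the trace (102400, 512-byte blocks).
    $RPC bdev_null_create "Null$i" 102400 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  $RPC nvmf_get_subsystems           # returns the JSON dumped above

  # Teardown, mirroring the delete loop running around this point in the trace:
  for i in 1 2 3 4; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $RPC bdev_null_delete "Null$i"
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430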
00:08:40.029 13:19:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.029 13:19:37 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:40.029 13:19:37 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:40.029 13:19:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.029 13:19:37 -- common/autotest_common.sh@10 -- # set +x 00:08:40.029 13:19:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.290 13:19:37 -- target/discovery.sh@49 -- # check_bdevs= 00:08:40.290 13:19:37 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:40.290 13:19:37 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:40.290 13:19:37 -- target/discovery.sh@57 -- # nvmftestfini 00:08:40.290 13:19:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.290 13:19:37 -- nvmf/common.sh@116 -- # sync 00:08:40.290 13:19:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:40.290 13:19:37 -- nvmf/common.sh@119 -- # set +e 00:08:40.290 13:19:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.290 13:19:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:40.290 rmmod nvme_tcp 00:08:40.290 rmmod nvme_fabrics 00:08:40.290 rmmod nvme_keyring 00:08:40.290 13:19:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.290 13:19:37 -- nvmf/common.sh@123 -- # set -e 00:08:40.290 13:19:37 -- nvmf/common.sh@124 -- # return 0 00:08:40.290 13:19:37 -- nvmf/common.sh@477 -- # '[' -n 793218 ']' 00:08:40.290 13:19:37 -- nvmf/common.sh@478 -- # killprocess 793218 00:08:40.290 13:19:37 -- common/autotest_common.sh@926 -- # '[' -z 793218 ']' 00:08:40.290 13:19:37 -- common/autotest_common.sh@930 -- # kill -0 793218 00:08:40.290 13:19:37 -- common/autotest_common.sh@931 -- # uname 00:08:40.290 13:19:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:40.290 13:19:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 793218 00:08:40.290 13:19:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:40.290 13:19:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:40.290 13:19:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 793218' 00:08:40.290 killing process with pid 793218 00:08:40.290 13:19:37 -- common/autotest_common.sh@945 -- # kill 793218 00:08:40.290 [2024-07-26 13:19:37.643930] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:40.290 13:19:37 -- common/autotest_common.sh@950 -- # wait 793218 00:08:40.290 13:19:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.290 13:19:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:40.290 13:19:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:40.290 13:19:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.290 13:19:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:40.290 13:19:37 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.290 13:19:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.290 13:19:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.836 13:19:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:42.836 00:08:42.836 real 0m10.994s 00:08:42.836 user 0m8.214s 00:08:42.836 sys 0m5.699s 00:08:42.836 13:19:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.836 13:19:39 -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 ************************************ 00:08:42.836 END TEST nvmf_discovery 00:08:42.836 ************************************ 00:08:42.836 13:19:39 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:42.836 13:19:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:42.836 13:19:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.836 13:19:39 -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 ************************************ 00:08:42.836 START TEST nvmf_referrals 00:08:42.836 ************************************ 00:08:42.836 13:19:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:42.836 * Looking for test storage... 00:08:42.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.836 13:19:39 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.836 13:19:39 -- nvmf/common.sh@7 -- # uname -s 00:08:42.836 13:19:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.836 13:19:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.836 13:19:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.836 13:19:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.836 13:19:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.836 13:19:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.836 13:19:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.836 13:19:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.836 13:19:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.836 13:19:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.836 13:19:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.836 13:19:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.836 13:19:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.836 13:19:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.836 13:19:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.836 13:19:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.836 13:19:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.836 13:19:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.836 13:19:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.836 13:19:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:19:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:19:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:19:40 -- paths/export.sh@5 -- # export PATH 00:08:42.836 13:19:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:19:40 -- nvmf/common.sh@46 -- # : 0 00:08:42.836 13:19:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.836 13:19:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.836 13:19:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.836 13:19:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.836 13:19:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.836 13:19:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:42.836 13:19:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.836 13:19:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.836 13:19:40 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:42.836 13:19:40 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:42.836 13:19:40 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:42.836 13:19:40 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:42.836 13:19:40 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:42.836 13:19:40 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:42.836 13:19:40 -- target/referrals.sh@37 -- # nvmftestinit 00:08:42.836 13:19:40 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:42.836 13:19:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.836 13:19:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.836 13:19:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.836 13:19:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.836 13:19:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.836 13:19:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.836 13:19:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.836 13:19:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:42.836 13:19:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:42.836 13:19:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:42.836 13:19:40 -- common/autotest_common.sh@10 -- # set +x 00:08:49.427 13:19:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.427 13:19:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:49.427 13:19:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:49.427 13:19:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:49.427 13:19:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:49.427 13:19:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:49.427 13:19:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:49.427 13:19:46 -- nvmf/common.sh@294 -- # net_devs=() 00:08:49.427 13:19:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:49.427 13:19:46 -- nvmf/common.sh@295 -- # e810=() 00:08:49.427 13:19:46 -- nvmf/common.sh@295 -- # local -ga e810 00:08:49.427 13:19:46 -- nvmf/common.sh@296 -- # x722=() 00:08:49.427 13:19:46 -- nvmf/common.sh@296 -- # local -ga x722 00:08:49.427 13:19:46 -- nvmf/common.sh@297 -- # mlx=() 00:08:49.427 13:19:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:49.427 13:19:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.427 13:19:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.428 13:19:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.428 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.428 13:19:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.428 13:19:46 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.428 13:19:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.428 13:19:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.428 13:19:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.428 13:19:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.428 13:19:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.428 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.428 13:19:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.428 13:19:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.428 13:19:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.428 13:19:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.428 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.428 13:19:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:49.428 13:19:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:49.428 13:19:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.428 13:19:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.428 13:19:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:49.428 13:19:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.428 13:19:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.428 13:19:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:49.428 13:19:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.428 13:19:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.428 13:19:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:49.428 13:19:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:49.428 13:19:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.428 13:19:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
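Collected in one place, the nvmf_tcp_init steps traced here give the referrals test the same back-to-back topology the discovery test used: the first E810 port moves into a namespace as the target side, the second stays in the root namespace as the initiator. A sketch using the interface names and addresses reported by this run; the variable names are illustrative:

  TGT_NS=cvl_0_0_ns_spdk   # target namespace
  TGT_IF=cvl_0_0           # target side, 10.0.0.2/24, inside the namespace
  INI_IF=cvl_0_1           # initiator side, 10.0.0.1/24, root namespace

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$TGT_NS"
  ip link set "$TGT_IF" netns "$TGT_NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
  ip netns exec "$TGT_NS" ip link set lo up
  # Accept NVMe/TCP (port 4420) arriving on the initiator-side interface, as in the trace.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Verify reachability in both directions, which the trace does next with single pings.
  ping -c 1 10.0.0.2
  ip netns exec "$TGT_NS" ping -c 1 10.0.0.1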
00:08:49.428 13:19:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.428 13:19:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.428 13:19:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:49.428 13:19:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.428 13:19:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.428 13:19:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.428 13:19:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:49.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:08:49.428 00:08:49.428 --- 10.0.0.2 ping statistics --- 00:08:49.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.428 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:08:49.428 13:19:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:08:49.428 00:08:49.428 --- 10.0.0.1 ping statistics --- 00:08:49.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.428 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:08:49.428 13:19:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.428 13:19:46 -- nvmf/common.sh@410 -- # return 0 00:08:49.428 13:19:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:49.428 13:19:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.428 13:19:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:49.428 13:19:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.428 13:19:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:49.428 13:19:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:49.428 13:19:46 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:49.428 13:19:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:49.428 13:19:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.428 13:19:46 -- common/autotest_common.sh@10 -- # set +x 00:08:49.428 13:19:46 -- nvmf/common.sh@469 -- # nvmfpid=797758 00:08:49.428 13:19:46 -- nvmf/common.sh@470 -- # waitforlisten 797758 00:08:49.428 13:19:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.428 13:19:46 -- common/autotest_common.sh@819 -- # '[' -z 797758 ']' 00:08:49.428 13:19:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.428 13:19:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.428 13:19:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.428 13:19:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.428 13:19:46 -- common/autotest_common.sh@10 -- # set +x 00:08:49.690 [2024-07-26 13:19:46.904242] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:49.690 [2024-07-26 13:19:46.904304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.690 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.690 [2024-07-26 13:19:46.976676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.690 [2024-07-26 13:19:47.015424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.690 [2024-07-26 13:19:47.015580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.690 [2024-07-26 13:19:47.015590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.690 [2024-07-26 13:19:47.015598] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.690 [2024-07-26 13:19:47.015744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.690 [2024-07-26 13:19:47.015864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.690 [2024-07-26 13:19:47.016024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.690 [2024-07-26 13:19:47.016025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.262 13:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.262 13:19:47 -- common/autotest_common.sh@852 -- # return 0 00:08:50.262 13:19:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:50.262 13:19:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.262 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.262 13:19:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.262 13:19:47 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.262 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.262 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.262 [2024-07-26 13:19:47.724487] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.262 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.262 13:19:47 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:50.262 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.262 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.523 [2024-07-26 13:19:47.740669] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:50.523 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.523 13:19:47 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:50.523 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.524 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:50.524 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.524 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:50.524 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.524 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.524 13:19:47 -- target/referrals.sh@48 -- # jq length 00:08:50.524 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.524 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:50.524 13:19:47 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:50.524 13:19:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:50.524 13:19:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.524 13:19:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:50.524 13:19:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.524 13:19:47 -- target/referrals.sh@21 -- # sort 00:08:50.524 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 13:19:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:50.524 13:19:47 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:50.524 13:19:47 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:50.524 13:19:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.524 13:19:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.524 13:19:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:50.524 13:19:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.524 13:19:47 -- target/referrals.sh@26 -- # sort 00:08:50.785 13:19:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:50.785 13:19:48 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:50.785 13:19:48 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:50.785 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.785 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.785 13:19:48 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:50.785 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.785 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.785 13:19:48 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:50.785 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.785 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.785 13:19:48 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.785 13:19:48 -- target/referrals.sh@56 -- # jq length 00:08:50.785 13:19:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.785 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.785 13:19:48 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:50.785 13:19:48 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:50.785 13:19:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.785 13:19:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.785 13:19:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:50.785 13:19:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.785 13:19:48 -- target/referrals.sh@26 -- # sort 00:08:51.048 13:19:48 -- target/referrals.sh@26 -- # echo 00:08:51.048 13:19:48 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:51.048 13:19:48 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:51.048 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.048 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.048 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.048 13:19:48 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:51.048 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.048 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.048 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.048 13:19:48 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:51.048 13:19:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.048 13:19:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.048 13:19:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.048 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.048 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.048 13:19:48 -- target/referrals.sh@21 -- # sort 00:08:51.048 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.048 13:19:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:51.048 13:19:48 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:51.048 13:19:48 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:51.048 13:19:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.048 13:19:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.048 13:19:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.048 13:19:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.048 13:19:48 -- target/referrals.sh@26 -- # sort 00:08:51.310 13:19:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:51.310 13:19:48 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:51.310 13:19:48 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:51.310 13:19:48 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:51.310 13:19:48 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:51.310 13:19:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.310 13:19:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:51.310 13:19:48 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:51.310 13:19:48 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:51.310 13:19:48 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:51.310 13:19:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:51.310 13:19:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.310 13:19:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:51.572 13:19:48 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:51.572 13:19:48 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:51.572 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.572 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.572 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.572 13:19:48 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:51.572 13:19:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:51.572 13:19:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.572 13:19:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:51.572 13:19:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.572 13:19:48 -- target/referrals.sh@21 -- # sort 00:08:51.572 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:08:51.572 13:19:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.572 13:19:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:51.572 13:19:48 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:51.572 13:19:48 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:51.572 13:19:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:51.572 13:19:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:51.572 13:19:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.572 13:19:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:51.572 13:19:48 -- target/referrals.sh@26 -- # sort 00:08:51.834 13:19:49 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:51.834 13:19:49 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:51.834 13:19:49 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:51.834 13:19:49 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:51.834 13:19:49 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:51.834 13:19:49 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.834 13:19:49 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:51.834 13:19:49 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:51.834 13:19:49 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:51.834 13:19:49 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:51.834 13:19:49 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:51.834 13:19:49 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:51.834 13:19:49 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:51.834 13:19:49 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:51.834 13:19:49 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:51.834 13:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.834 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.834 13:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.834 13:19:49 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:51.834 13:19:49 -- target/referrals.sh@82 -- # jq length 00:08:51.834 13:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.834 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:08:51.834 13:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.096 13:19:49 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:52.096 13:19:49 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:52.096 13:19:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:52.096 13:19:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:52.096 13:19:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:52.096 13:19:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:52.096 13:19:49 -- target/referrals.sh@26 -- # sort 00:08:52.096 13:19:49 -- target/referrals.sh@26 -- # echo 00:08:52.096 13:19:49 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:52.096 13:19:49 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:52.096 13:19:49 -- target/referrals.sh@86 -- # nvmftestfini 00:08:52.096 13:19:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:52.096 13:19:49 -- nvmf/common.sh@116 -- # sync 00:08:52.096 13:19:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:52.096 13:19:49 -- nvmf/common.sh@119 -- # set +e 00:08:52.096 13:19:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:52.096 13:19:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:52.096 rmmod nvme_tcp 00:08:52.096 rmmod nvme_fabrics 00:08:52.096 rmmod nvme_keyring 00:08:52.096 13:19:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:52.096 13:19:49 -- nvmf/common.sh@123 -- # set -e 00:08:52.096 13:19:49 -- nvmf/common.sh@124 -- # return 0 00:08:52.096 13:19:49 -- nvmf/common.sh@477 
-- # '[' -n 797758 ']' 00:08:52.096 13:19:49 -- nvmf/common.sh@478 -- # killprocess 797758 00:08:52.096 13:19:49 -- common/autotest_common.sh@926 -- # '[' -z 797758 ']' 00:08:52.096 13:19:49 -- common/autotest_common.sh@930 -- # kill -0 797758 00:08:52.096 13:19:49 -- common/autotest_common.sh@931 -- # uname 00:08:52.096 13:19:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.096 13:19:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 797758 00:08:52.357 13:19:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.357 13:19:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.357 13:19:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 797758' 00:08:52.357 killing process with pid 797758 00:08:52.357 13:19:49 -- common/autotest_common.sh@945 -- # kill 797758 00:08:52.357 13:19:49 -- common/autotest_common.sh@950 -- # wait 797758 00:08:52.357 13:19:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:52.357 13:19:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:52.357 13:19:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:52.357 13:19:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.357 13:19:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:52.357 13:19:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.357 13:19:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.357 13:19:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.174 13:19:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:55.174 00:08:55.174 real 0m11.883s 00:08:55.174 user 0m13.500s 00:08:55.174 sys 0m5.752s 00:08:55.174 13:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.174 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.174 ************************************ 00:08:55.174 END TEST nvmf_referrals 00:08:55.174 ************************************ 00:08:55.174 13:19:51 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:55.174 13:19:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:55.174 13:19:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.174 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:08:55.174 ************************************ 00:08:55.174 START TEST nvmf_connect_disconnect 00:08:55.174 ************************************ 00:08:55.174 13:19:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:55.174 * Looking for test storage... 
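For reference, the referral sequence exercised by the nvmf_referrals run above reduces to a few RPCs against the running target plus a host-side read of the discovery log. The sketch below is a condensed approximation, not the test script itself: it assumes scripts/rpc.py from an SPDK checkout stands in for the suite's rpc_cmd wrapper, the default /var/tmp/spdk.sock RPC socket, and a discovery listener already up on 10.0.0.2:8009; the jq filter is the one used in the log.

    # Register three referrals on the discovery service and count them.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 3

    # Cross-check from the host side: the referrals show up as extra discovery-log
    # records whose subtype is not "current discovery subsystem".
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Remove them again; the referral list should drop back to empty.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expect 0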
00:08:55.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.174 13:19:51 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.174 13:19:51 -- nvmf/common.sh@7 -- # uname -s 00:08:55.174 13:19:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.174 13:19:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.174 13:19:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.174 13:19:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.174 13:19:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.174 13:19:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.174 13:19:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.174 13:19:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.174 13:19:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.174 13:19:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.174 13:19:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:55.174 13:19:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:55.174 13:19:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.174 13:19:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.174 13:19:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.174 13:19:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.174 13:19:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.174 13:19:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.174 13:19:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.174 13:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.174 13:19:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.174 13:19:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.174 13:19:51 -- paths/export.sh@5 -- # export PATH 00:08:55.174 13:19:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.174 13:19:51 -- nvmf/common.sh@46 -- # : 0 00:08:55.174 13:19:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:55.174 13:19:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:55.174 13:19:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:55.174 13:19:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.174 13:19:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.174 13:19:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:55.174 13:19:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:55.174 13:19:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:55.174 13:19:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.174 13:19:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:55.174 13:19:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:55.174 13:19:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:55.174 13:19:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.174 13:19:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:55.174 13:19:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:55.174 13:19:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:55.174 13:19:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.174 13:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.174 13:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.174 13:19:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:55.175 13:19:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:55.175 13:19:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:55.175 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:09:01.768 13:19:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:01.768 13:19:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:01.768 13:19:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:01.768 13:19:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:01.768 13:19:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:01.768 13:19:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:01.768 13:19:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:01.768 13:19:58 -- nvmf/common.sh@294 -- # net_devs=() 00:09:01.768 13:19:58 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:01.768 13:19:58 -- nvmf/common.sh@295 -- # e810=() 00:09:01.768 13:19:58 -- nvmf/common.sh@295 -- # local -ga e810 00:09:01.768 13:19:58 -- nvmf/common.sh@296 -- # x722=() 00:09:01.768 13:19:58 -- nvmf/common.sh@296 -- # local -ga x722 00:09:01.768 13:19:58 -- nvmf/common.sh@297 -- # mlx=() 00:09:01.768 13:19:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:01.768 13:19:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.768 13:19:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:01.768 13:19:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:01.768 13:19:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:01.768 13:19:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:01.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:01.768 13:19:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:01.768 13:19:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:01.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:01.768 13:19:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:01.768 13:19:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.768 13:19:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.768 13:19:58 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:09:01.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:01.768 13:19:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.768 13:19:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:01.768 13:19:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.768 13:19:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.768 13:19:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:01.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:01.768 13:19:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.768 13:19:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:01.768 13:19:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:01.768 13:19:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:01.768 13:19:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.768 13:19:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.768 13:19:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.768 13:19:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:01.768 13:19:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.768 13:19:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.768 13:19:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:01.768 13:19:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.768 13:19:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.768 13:19:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:01.768 13:19:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:01.768 13:19:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.768 13:19:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.768 13:19:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.768 13:19:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.768 13:19:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:01.768 13:19:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.768 13:19:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.768 13:19:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.768 13:19:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:01.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:09:01.768 00:09:01.768 --- 10.0.0.2 ping statistics --- 00:09:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.769 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:09:01.769 13:19:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.501 ms 00:09:01.769 00:09:01.769 --- 10.0.0.1 ping statistics --- 00:09:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.769 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:09:01.769 13:19:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.769 13:19:59 -- nvmf/common.sh@410 -- # return 0 00:09:01.769 13:19:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:01.769 13:19:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.769 13:19:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:01.769 13:19:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:01.769 13:19:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.769 13:19:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:01.769 13:19:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:01.769 13:19:59 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:01.769 13:19:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:01.769 13:19:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:01.769 13:19:59 -- common/autotest_common.sh@10 -- # set +x 00:09:01.769 13:19:59 -- nvmf/common.sh@469 -- # nvmfpid=802591 00:09:01.769 13:19:59 -- nvmf/common.sh@470 -- # waitforlisten 802591 00:09:01.769 13:19:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.769 13:19:59 -- common/autotest_common.sh@819 -- # '[' -z 802591 ']' 00:09:01.769 13:19:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.769 13:19:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.769 13:19:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.769 13:19:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.769 13:19:59 -- common/autotest_common.sh@10 -- # set +x 00:09:01.769 [2024-07-26 13:19:59.150644] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:01.769 [2024-07-26 13:19:59.150706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.769 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.769 [2024-07-26 13:19:59.223389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.030 [2024-07-26 13:19:59.261221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.030 [2024-07-26 13:19:59.261379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.030 [2024-07-26 13:19:59.261390] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.030 [2024-07-26 13:19:59.261397] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
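The nvmftestinit steps traced just above build the whole TCP test topology out of the two cvl_0_* ports on the machine: one port is pushed into a private network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and the ping pair confirms both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands in the log, assuming the same interface names and addresses (a sketch, not the exact common.sh code path):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # The target application is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF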
00:09:02.030 [2024-07-26 13:19:59.261495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.030 [2024-07-26 13:19:59.261638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.030 [2024-07-26 13:19:59.261799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.030 [2024-07-26 13:19:59.261800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.603 13:19:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:02.603 13:19:59 -- common/autotest_common.sh@852 -- # return 0 00:09:02.603 13:19:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:02.603 13:19:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:02.603 13:19:59 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 13:19:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.603 13:19:59 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:02.603 13:19:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.603 13:19:59 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 [2024-07-26 13:19:59.974568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.603 13:19:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.603 13:19:59 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:02.603 13:19:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.603 13:19:59 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 13:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.603 13:20:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.603 13:20:00 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 13:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.603 13:20:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.603 13:20:00 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 13:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.603 13:20:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:02.603 13:20:00 -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 [2024-07-26 13:20:00.033918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.603 13:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:02.603 13:20:00 -- target/connect_disconnect.sh@34 -- # set +x 00:09:05.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:14.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.249 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:08.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.028 13:23:53 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
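The long run of "disconnected 1 controller(s)" lines above is the body of the connect_disconnect test: the target exports one 64 MiB malloc namespace, and the host then attaches to and detaches from it 100 times (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). The loop itself is not printed in the log, so the following is only a rough reconstruction; the rpc.py path, the explicit connect arguments, and the bare seq loop are assumptions, and the real script also waits on the namespace device between steps.

    # Target side, as set up earlier in the log: transport, bdev, subsystem, listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                       # creates Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: 100 connect/disconnect iterations, 8 I/O queues per connect.
    for i in $(seq 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done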
00:12:56.028 13:23:53 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:56.028 13:23:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.028 13:23:53 -- nvmf/common.sh@116 -- # sync 00:12:56.028 13:23:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.028 13:23:53 -- nvmf/common.sh@119 -- # set +e 00:12:56.028 13:23:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.028 13:23:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.028 rmmod nvme_tcp 00:12:56.028 rmmod nvme_fabrics 00:12:56.028 rmmod nvme_keyring 00:12:56.028 13:23:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.028 13:23:53 -- nvmf/common.sh@123 -- # set -e 00:12:56.028 13:23:53 -- nvmf/common.sh@124 -- # return 0 00:12:56.028 13:23:53 -- nvmf/common.sh@477 -- # '[' -n 802591 ']' 00:12:56.028 13:23:53 -- nvmf/common.sh@478 -- # killprocess 802591 00:12:56.028 13:23:53 -- common/autotest_common.sh@926 -- # '[' -z 802591 ']' 00:12:56.028 13:23:53 -- common/autotest_common.sh@930 -- # kill -0 802591 00:12:56.028 13:23:53 -- common/autotest_common.sh@931 -- # uname 00:12:56.028 13:23:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.028 13:23:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 802591 00:12:56.028 13:23:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:56.028 13:23:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:56.028 13:23:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 802591' 00:12:56.028 killing process with pid 802591 00:12:56.028 13:23:53 -- common/autotest_common.sh@945 -- # kill 802591 00:12:56.028 13:23:53 -- common/autotest_common.sh@950 -- # wait 802591 00:12:56.028 13:23:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.028 13:23:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.028 13:23:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.028 13:23:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.028 13:23:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.028 13:23:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.028 13:23:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.028 13:23:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.945 13:23:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:57.945 00:12:57.945 real 4m3.541s 00:12:57.945 user 15m30.599s 00:12:57.945 sys 0m21.501s 00:12:57.945 13:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.945 13:23:55 -- common/autotest_common.sh@10 -- # set +x 00:12:57.945 ************************************ 00:12:57.945 END TEST nvmf_connect_disconnect 00:12:57.945 ************************************ 00:12:57.945 13:23:55 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:57.945 13:23:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:57.945 13:23:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:57.945 13:23:55 -- common/autotest_common.sh@10 -- # set +x 00:12:57.945 ************************************ 00:12:57.945 START TEST nvmf_multitarget 00:12:57.945 ************************************ 00:12:57.945 13:23:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:58.207 * Looking for test storage... 
00:12:58.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.207 13:23:55 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.207 13:23:55 -- nvmf/common.sh@7 -- # uname -s 00:12:58.207 13:23:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.207 13:23:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.207 13:23:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.207 13:23:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.207 13:23:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.207 13:23:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.207 13:23:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.207 13:23:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.207 13:23:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.207 13:23:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.207 13:23:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:58.207 13:23:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:58.207 13:23:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.207 13:23:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.207 13:23:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.207 13:23:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.207 13:23:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.207 13:23:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.207 13:23:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.207 13:23:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.207 13:23:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.207 13:23:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.207 13:23:55 -- paths/export.sh@5 -- # export PATH 00:12:58.207 13:23:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.207 13:23:55 -- nvmf/common.sh@46 -- # : 0 00:12:58.207 13:23:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.207 13:23:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.207 13:23:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.207 13:23:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.207 13:23:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.207 13:23:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.207 13:23:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.207 13:23:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.207 13:23:55 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:58.207 13:23:55 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:58.207 13:23:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.207 13:23:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.207 13:23:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.207 13:23:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.207 13:23:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.207 13:23:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.207 13:23:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.207 13:23:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.207 13:23:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:58.207 13:23:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:58.207 13:23:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:58.207 13:23:55 -- common/autotest_common.sh@10 -- # set +x 00:13:06.355 13:24:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.355 13:24:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:06.355 13:24:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:06.355 13:24:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:06.355 13:24:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:06.355 13:24:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:06.355 13:24:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:06.355 13:24:02 -- nvmf/common.sh@294 -- # net_devs=() 00:13:06.355 13:24:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:06.355 13:24:02 -- 
nvmf/common.sh@295 -- # e810=() 00:13:06.355 13:24:02 -- nvmf/common.sh@295 -- # local -ga e810 00:13:06.355 13:24:02 -- nvmf/common.sh@296 -- # x722=() 00:13:06.355 13:24:02 -- nvmf/common.sh@296 -- # local -ga x722 00:13:06.355 13:24:02 -- nvmf/common.sh@297 -- # mlx=() 00:13:06.355 13:24:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:06.355 13:24:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.355 13:24:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:06.355 13:24:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:06.355 13:24:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.355 13:24:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:06.355 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:06.355 13:24:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.355 13:24:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:06.355 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:06.355 13:24:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.355 13:24:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.355 13:24:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.355 13:24:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:13:06.355 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:06.355 13:24:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.355 13:24:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.355 13:24:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.355 13:24:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.355 13:24:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:06.355 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:06.355 13:24:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.355 13:24:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:06.355 13:24:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:06.355 13:24:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:06.355 13:24:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.355 13:24:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.355 13:24:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.355 13:24:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:06.355 13:24:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.355 13:24:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.355 13:24:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:06.355 13:24:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.355 13:24:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.355 13:24:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:06.355 13:24:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:06.355 13:24:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.355 13:24:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.355 13:24:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.355 13:24:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.355 13:24:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:06.355 13:24:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.355 13:24:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.355 13:24:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.355 13:24:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:06.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:13:06.355 00:13:06.355 --- 10.0.0.2 ping statistics --- 00:13:06.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.355 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:13:06.355 13:24:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:13:06.356 00:13:06.356 --- 10.0.0.1 ping statistics --- 00:13:06.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.356 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:13:06.356 13:24:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.356 13:24:02 -- nvmf/common.sh@410 -- # return 0 00:13:06.356 13:24:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:06.356 13:24:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.356 13:24:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:06.356 13:24:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:06.356 13:24:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.356 13:24:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:06.356 13:24:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:06.356 13:24:02 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:06.356 13:24:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:06.356 13:24:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:06.356 13:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:06.356 13:24:02 -- nvmf/common.sh@469 -- # nvmfpid=855058 00:13:06.356 13:24:02 -- nvmf/common.sh@470 -- # waitforlisten 855058 00:13:06.356 13:24:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.356 13:24:02 -- common/autotest_common.sh@819 -- # '[' -z 855058 ']' 00:13:06.356 13:24:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.356 13:24:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.356 13:24:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.356 13:24:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.356 13:24:02 -- common/autotest_common.sh@10 -- # set +x 00:13:06.356 [2024-07-26 13:24:02.764177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:06.356 [2024-07-26 13:24:02.764256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.356 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.356 [2024-07-26 13:24:02.835918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.356 [2024-07-26 13:24:02.874387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.356 [2024-07-26 13:24:02.874533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.356 [2024-07-26 13:24:02.874542] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.356 [2024-07-26 13:24:02.874550] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
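For anyone reproducing the nvmf_tcp_init sequence traced above by hand, this is a minimal sketch of the same network setup; the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are the values from this particular run and will differ on other hosts.

  # move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends: the initiator port stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in on the default port, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1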
00:13:06.356 [2024-07-26 13:24:02.874693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.356 [2024-07-26 13:24:02.874813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.356 [2024-07-26 13:24:02.874973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.356 [2024-07-26 13:24:02.874974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.356 13:24:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:06.356 13:24:03 -- common/autotest_common.sh@852 -- # return 0 00:13:06.356 13:24:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.356 13:24:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:06.356 13:24:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.356 13:24:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.356 13:24:03 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:06.356 13:24:03 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:06.356 13:24:03 -- target/multitarget.sh@21 -- # jq length 00:13:06.356 13:24:03 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:06.356 13:24:03 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:06.356 "nvmf_tgt_1" 00:13:06.356 13:24:03 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:06.617 "nvmf_tgt_2" 00:13:06.617 13:24:03 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:06.617 13:24:03 -- target/multitarget.sh@28 -- # jq length 00:13:06.617 13:24:03 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:06.617 13:24:03 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:06.617 true 00:13:06.617 13:24:04 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:06.877 true 00:13:06.877 13:24:04 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:06.877 13:24:04 -- target/multitarget.sh@35 -- # jq length 00:13:06.877 13:24:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:06.877 13:24:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:06.877 13:24:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:06.877 13:24:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.877 13:24:04 -- nvmf/common.sh@116 -- # sync 00:13:06.877 13:24:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.877 13:24:04 -- nvmf/common.sh@119 -- # set +e 00:13:06.877 13:24:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.877 13:24:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.877 rmmod nvme_tcp 00:13:06.877 rmmod nvme_fabrics 00:13:06.877 rmmod nvme_keyring 00:13:06.877 13:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.877 13:24:04 -- nvmf/common.sh@123 -- # set -e 00:13:06.877 13:24:04 -- nvmf/common.sh@124 -- # return 0 
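The nvmf_multitarget case finishing here exercises target creation and deletion over RPC; as a condensed reference, the sequence traced above amounts to the following (multitarget_rpc.py is invoked with the full workspace path in the trace, shortened here to its path inside the spdk checkout):

  RPC=test/nvmf/target/multitarget_rpc.py      # relative to the spdk checkout
  $RPC nvmf_get_targets | jq length            # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length            # 3 after the two extra targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length            # back to 1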
00:13:06.877 13:24:04 -- nvmf/common.sh@477 -- # '[' -n 855058 ']' 00:13:06.877 13:24:04 -- nvmf/common.sh@478 -- # killprocess 855058 00:13:06.877 13:24:04 -- common/autotest_common.sh@926 -- # '[' -z 855058 ']' 00:13:06.877 13:24:04 -- common/autotest_common.sh@930 -- # kill -0 855058 00:13:06.877 13:24:04 -- common/autotest_common.sh@931 -- # uname 00:13:06.877 13:24:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:06.877 13:24:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 855058 00:13:07.138 13:24:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:07.138 13:24:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:07.138 13:24:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 855058' 00:13:07.138 killing process with pid 855058 00:13:07.138 13:24:04 -- common/autotest_common.sh@945 -- # kill 855058 00:13:07.138 13:24:04 -- common/autotest_common.sh@950 -- # wait 855058 00:13:07.138 13:24:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:07.138 13:24:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:07.138 13:24:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:07.138 13:24:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.138 13:24:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:07.138 13:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.138 13:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.138 13:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.682 13:24:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:09.682 00:13:09.682 real 0m11.167s 00:13:09.682 user 0m9.230s 00:13:09.682 sys 0m5.770s 00:13:09.682 13:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.682 13:24:06 -- common/autotest_common.sh@10 -- # set +x 00:13:09.682 ************************************ 00:13:09.682 END TEST nvmf_multitarget 00:13:09.682 ************************************ 00:13:09.682 13:24:06 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:09.682 13:24:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.682 13:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.682 13:24:06 -- common/autotest_common.sh@10 -- # set +x 00:13:09.682 ************************************ 00:13:09.682 START TEST nvmf_rpc 00:13:09.682 ************************************ 00:13:09.682 13:24:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:09.682 * Looking for test storage... 
00:13:09.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.682 13:24:06 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.682 13:24:06 -- nvmf/common.sh@7 -- # uname -s 00:13:09.682 13:24:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.682 13:24:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.682 13:24:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.682 13:24:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.682 13:24:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.682 13:24:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.682 13:24:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.682 13:24:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.682 13:24:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.682 13:24:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.682 13:24:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.682 13:24:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:09.682 13:24:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.682 13:24:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.682 13:24:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.682 13:24:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.682 13:24:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.682 13:24:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.682 13:24:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.682 13:24:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.682 13:24:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.682 13:24:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.682 13:24:06 -- paths/export.sh@5 -- # export PATH 00:13:09.682 13:24:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.682 13:24:06 -- nvmf/common.sh@46 -- # : 0 00:13:09.682 13:24:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.682 13:24:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.682 13:24:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.682 13:24:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.682 13:24:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.682 13:24:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.682 13:24:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.682 13:24:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.682 13:24:06 -- target/rpc.sh@11 -- # loops=5 00:13:09.682 13:24:06 -- target/rpc.sh@23 -- # nvmftestinit 00:13:09.682 13:24:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.682 13:24:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.682 13:24:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.682 13:24:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.682 13:24:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.682 13:24:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.682 13:24:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.682 13:24:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.682 13:24:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:09.682 13:24:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:09.682 13:24:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:09.682 13:24:06 -- common/autotest_common.sh@10 -- # set +x 00:13:16.280 13:24:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.280 13:24:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:16.280 13:24:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:16.280 13:24:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:16.280 13:24:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:16.280 13:24:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:16.280 13:24:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:16.280 13:24:13 -- nvmf/common.sh@294 -- # net_devs=() 00:13:16.280 13:24:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:16.280 13:24:13 -- nvmf/common.sh@295 -- # e810=() 00:13:16.280 13:24:13 -- nvmf/common.sh@295 -- # local -ga e810 00:13:16.280 
13:24:13 -- nvmf/common.sh@296 -- # x722=() 00:13:16.280 13:24:13 -- nvmf/common.sh@296 -- # local -ga x722 00:13:16.280 13:24:13 -- nvmf/common.sh@297 -- # mlx=() 00:13:16.280 13:24:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:16.280 13:24:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.280 13:24:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:16.280 13:24:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:16.280 13:24:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.280 13:24:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:16.280 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:16.280 13:24:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.280 13:24:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:16.280 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:16.280 13:24:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.280 13:24:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.280 13:24:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.280 13:24:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:16.280 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:16.280 13:24:13 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:16.280 13:24:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.280 13:24:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.280 13:24:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.280 13:24:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:16.280 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:16.280 13:24:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.280 13:24:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:16.280 13:24:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:16.280 13:24:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:16.280 13:24:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.280 13:24:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.280 13:24:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.280 13:24:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:16.280 13:24:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.280 13:24:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.280 13:24:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:16.280 13:24:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.280 13:24:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.280 13:24:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:16.280 13:24:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:16.280 13:24:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.280 13:24:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.542 13:24:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.542 13:24:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.542 13:24:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:16.542 13:24:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.542 13:24:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.542 13:24:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.542 13:24:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:16.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:13:16.542 00:13:16.542 --- 10.0.0.2 ping statistics --- 00:13:16.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.542 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:13:16.542 13:24:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:13:16.542 00:13:16.542 --- 10.0.0.1 ping statistics --- 00:13:16.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.542 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:13:16.542 13:24:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.542 13:24:14 -- nvmf/common.sh@410 -- # return 0 00:13:16.542 13:24:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:16.803 13:24:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.803 13:24:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:16.803 13:24:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:16.803 13:24:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.803 13:24:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:16.803 13:24:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:16.803 13:24:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:16.803 13:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:16.803 13:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:16.803 13:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 13:24:14 -- nvmf/common.sh@469 -- # nvmfpid=860197 00:13:16.803 13:24:14 -- nvmf/common.sh@470 -- # waitforlisten 860197 00:13:16.803 13:24:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.803 13:24:14 -- common/autotest_common.sh@819 -- # '[' -z 860197 ']' 00:13:16.803 13:24:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.803 13:24:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.803 13:24:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.803 13:24:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.803 13:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:16.803 [2024-07-26 13:24:14.123496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:16.803 [2024-07-26 13:24:14.123550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.803 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.803 [2024-07-26 13:24:14.188722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.803 [2024-07-26 13:24:14.219131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.803 [2024-07-26 13:24:14.219267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.803 [2024-07-26 13:24:14.219278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.803 [2024-07-26 13:24:14.219287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
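The nvmfappstart step traced above reduces to launching the target application inside the test namespace and waiting for its RPC socket; a minimal sketch follows, where the polling loop stands in for the harness's waitforlisten helper (an approximation, not its exact implementation):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude wait: the app creates /var/tmp/spdk.sock once it is ready to accept RPCs
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done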
00:13:16.803 [2024-07-26 13:24:14.219504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.803 [2024-07-26 13:24:14.219628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.803 [2024-07-26 13:24:14.219783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.803 [2024-07-26 13:24:14.219784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.744 13:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:17.744 13:24:14 -- common/autotest_common.sh@852 -- # return 0 00:13:17.744 13:24:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:17.744 13:24:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:17.744 13:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:17.744 13:24:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.744 13:24:14 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:17.744 13:24:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.744 13:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:17.744 13:24:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.744 13:24:14 -- target/rpc.sh@26 -- # stats='{ 00:13:17.744 "tick_rate": 2400000000, 00:13:17.744 "poll_groups": [ 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_0", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_1", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_2", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_3", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [] 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 }' 00:13:17.744 13:24:14 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:17.744 13:24:14 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:17.744 13:24:14 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:17.744 13:24:14 -- target/rpc.sh@15 -- # wc -l 00:13:17.744 13:24:14 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:17.744 13:24:14 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:17.744 13:24:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:17.744 13:24:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.744 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.744 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.744 [2024-07-26 13:24:15.037876] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.744 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.744 13:24:15 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:17.744 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.744 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.744 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.744 13:24:15 -- target/rpc.sh@33 -- # stats='{ 00:13:17.744 "tick_rate": 2400000000, 00:13:17.744 "poll_groups": [ 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_0", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [ 00:13:17.744 { 00:13:17.744 "trtype": "TCP" 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_1", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [ 00:13:17.744 { 00:13:17.744 "trtype": "TCP" 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_2", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [ 00:13:17.744 { 00:13:17.744 "trtype": "TCP" 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 }, 00:13:17.744 { 00:13:17.744 "name": "nvmf_tgt_poll_group_3", 00:13:17.744 "admin_qpairs": 0, 00:13:17.744 "io_qpairs": 0, 00:13:17.744 "current_admin_qpairs": 0, 00:13:17.744 "current_io_qpairs": 0, 00:13:17.744 "pending_bdev_io": 0, 00:13:17.744 "completed_nvme_io": 0, 00:13:17.744 "transports": [ 00:13:17.744 { 00:13:17.744 "trtype": "TCP" 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 } 00:13:17.744 ] 00:13:17.744 }' 00:13:17.744 13:24:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:17.744 13:24:15 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:17.744 13:24:15 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:17.744 13:24:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:17.744 13:24:15 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:17.745 13:24:15 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:17.745 13:24:15 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:17.745 13:24:15 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:17.745 13:24:15 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:17.745 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.745 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 Malloc1 00:13:17.745 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.745 13:24:15 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.745 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.745 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 
13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.745 13:24:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.745 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.745 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.745 13:24:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:17.745 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.745 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.006 13:24:15 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.006 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.006 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 [2024-07-26 13:24:15.225753] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.006 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.006 13:24:15 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:18.006 13:24:15 -- common/autotest_common.sh@640 -- # local es=0 00:13:18.006 13:24:15 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:18.006 13:24:15 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:18.006 13:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.006 13:24:15 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:18.006 13:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.006 13:24:15 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:18.006 13:24:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:18.006 13:24:15 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:18.006 13:24:15 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:18.006 13:24:15 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:18.006 [2024-07-26 13:24:15.252659] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:18.006 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:18.006 could not add new controller: failed to write to nvme-fabrics device 00:13:18.006 13:24:15 -- common/autotest_common.sh@643 -- # es=1 00:13:18.006 13:24:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:18.006 13:24:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:18.006 13:24:15 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:18.006 13:24:15 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.006 13:24:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.006 13:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:18.006 13:24:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.006 13:24:15 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.392 13:24:16 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.392 13:24:16 -- common/autotest_common.sh@1177 -- # local i=0 00:13:19.392 13:24:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.392 13:24:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:19.392 13:24:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.417 13:24:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.417 13:24:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.417 13:24:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.417 13:24:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:21.417 13:24:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.417 13:24:18 -- common/autotest_common.sh@1187 -- # return 0 00:13:21.417 13:24:18 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.678 13:24:18 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.678 13:24:18 -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.678 13:24:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:21.678 13:24:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.678 13:24:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:21.678 13:24:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.678 13:24:18 -- common/autotest_common.sh@1210 -- # return 0 00:13:21.678 13:24:18 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.678 13:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.678 13:24:18 -- common/autotest_common.sh@10 -- # set +x 00:13:21.678 13:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.678 13:24:18 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.678 13:24:18 -- common/autotest_common.sh@640 -- # local es=0 00:13:21.678 13:24:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.678 13:24:19 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:21.678 13:24:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.678 13:24:19 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:21.678 13:24:19 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.678 13:24:19 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:21.678 13:24:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:21.678 13:24:19 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:21.678 13:24:19 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:21.678 13:24:19 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.678 [2024-07-26 13:24:19.026109] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:21.678 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:21.678 could not add new controller: failed to write to nvme-fabrics device 00:13:21.678 13:24:19 -- common/autotest_common.sh@643 -- # es=1 00:13:21.678 13:24:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:21.678 13:24:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:21.678 13:24:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:21.678 13:24:19 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:21.678 13:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.678 13:24:19 -- common/autotest_common.sh@10 -- # set +x 00:13:21.678 13:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.678 13:24:19 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.594 13:24:20 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.594 13:24:20 -- common/autotest_common.sh@1177 -- # local i=0 00:13:23.594 13:24:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.594 13:24:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:23.594 13:24:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:25.510 13:24:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:25.510 13:24:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:25.510 13:24:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.510 13:24:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:25.510 13:24:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.510 13:24:22 -- common/autotest_common.sh@1187 -- # return 0 00:13:25.510 13:24:22 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.510 13:24:22 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.510 13:24:22 -- common/autotest_common.sh@1198 -- # local i=0 00:13:25.510 13:24:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:25.510 13:24:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.510 13:24:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:25.510 13:24:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.510 13:24:22 -- common/autotest_common.sh@1210 -- # return 0 00:13:25.510 13:24:22 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.510 13:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.510 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 13:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.510 13:24:22 -- target/rpc.sh@81 -- # seq 1 5 00:13:25.510 13:24:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:25.510 13:24:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.510 13:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.510 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 13:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.510 13:24:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.510 13:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.510 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 [2024-07-26 13:24:22.748860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.510 13:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.510 13:24:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:25.510 13:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.510 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 13:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.510 13:24:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.510 13:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.510 13:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 13:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.510 13:24:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.896 13:24:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.896 13:24:24 -- common/autotest_common.sh@1177 -- # local i=0 00:13:26.896 13:24:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.896 13:24:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:26.896 13:24:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:28.813 13:24:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:28.813 13:24:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:28.813 13:24:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.075 13:24:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:29.075 13:24:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.075 13:24:26 -- common/autotest_common.sh@1187 -- # return 0 00:13:29.075 13:24:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.075 13:24:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.075 13:24:26 -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.075 13:24:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:29.075 13:24:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
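Each of the five iterations of this loop walks one subsystem through the same lifecycle; condensed, and assuming rpc_cmd in the trace maps to scripts/rpc.py against /var/tmp/spdk.sock (an assumption, not shown in this excerpt), it is roughly:

  rpc='scripts/rpc.py'                         # assumed equivalent of the rpc_cmd wrapper
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # initiator side: connect, confirm the namespace appears, then tear everything down
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1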
00:13:29.075 13:24:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:29.075 13:24:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.075 13:24:26 -- common/autotest_common.sh@1210 -- # return 0 00:13:29.075 13:24:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:29.075 13:24:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 [2024-07-26 13:24:26.467838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:29.075 13:24:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.075 13:24:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.075 13:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.075 13:24:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.991 13:24:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.991 13:24:27 -- common/autotest_common.sh@1177 -- # local i=0 00:13:30.991 13:24:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.991 13:24:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:30.991 13:24:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:32.905 13:24:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:32.905 13:24:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:32.905 13:24:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.905 13:24:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:32.905 13:24:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.905 13:24:30 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:32.905 13:24:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.905 13:24:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.905 13:24:30 -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.905 13:24:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:32.905 13:24:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.905 13:24:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:32.905 13:24:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.905 13:24:30 -- common/autotest_common.sh@1210 -- # return 0 00:13:32.905 13:24:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.905 13:24:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 [2024-07-26 13:24:30.196566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.905 13:24:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.905 13:24:30 -- common/autotest_common.sh@10 -- # set +x 00:13:32.905 13:24:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.905 13:24:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.818 13:24:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.818 13:24:31 -- common/autotest_common.sh@1177 -- # local i=0 00:13:34.818 13:24:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.818 13:24:31 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:34.818 13:24:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:36.732 13:24:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:36.732 13:24:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:36.732 13:24:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.732 13:24:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:36.732 13:24:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.732 13:24:33 -- common/autotest_common.sh@1187 -- # return 0 00:13:36.732 13:24:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.732 13:24:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.732 13:24:33 -- common/autotest_common.sh@1198 -- # local i=0 00:13:36.732 13:24:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:36.732 13:24:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.732 13:24:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:36.732 13:24:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.732 13:24:33 -- common/autotest_common.sh@1210 -- # return 0 00:13:36.732 13:24:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 13:24:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 13:24:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.732 13:24:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 13:24:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 [2024-07-26 13:24:33.963856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 13:24:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 13:24:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.732 13:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.732 13:24:33 -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 13:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.732 
13:24:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.119 13:24:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.119 13:24:35 -- common/autotest_common.sh@1177 -- # local i=0 00:13:38.119 13:24:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.119 13:24:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:38.119 13:24:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:40.664 13:24:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:40.664 13:24:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:40.664 13:24:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.664 13:24:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:40.664 13:24:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.664 13:24:37 -- common/autotest_common.sh@1187 -- # return 0 00:13:40.664 13:24:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.664 13:24:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.664 13:24:37 -- common/autotest_common.sh@1198 -- # local i=0 00:13:40.664 13:24:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:40.664 13:24:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.664 13:24:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:40.664 13:24:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.664 13:24:37 -- common/autotest_common.sh@1210 -- # return 0 00:13:40.664 13:24:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.664 13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.664 13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.664 13:24:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.664 13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.664 13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 [2024-07-26 13:24:37.730501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.664 
13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.664 13:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.664 13:24:37 -- common/autotest_common.sh@10 -- # set +x 00:13:40.664 13:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.664 13:24:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.050 13:24:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.050 13:24:39 -- common/autotest_common.sh@1177 -- # local i=0 00:13:42.050 13:24:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.050 13:24:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:42.050 13:24:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:43.967 13:24:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:43.967 13:24:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:43.967 13:24:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.967 13:24:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:43.967 13:24:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.967 13:24:41 -- common/autotest_common.sh@1187 -- # return 0 00:13:43.967 13:24:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.967 13:24:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.967 13:24:41 -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.967 13:24:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:43.967 13:24:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.967 13:24:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:43.967 13:24:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.228 13:24:41 -- common/autotest_common.sh@1210 -- # return 0 00:13:44.228 13:24:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # seq 1 5 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.229 13:24:41 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 [2024-07-26 13:24:41.489618] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.229 13:24:41 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 [2024-07-26 13:24:41.545749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.229 13:24:41 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 [2024-07-26 13:24:41.605934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.229 13:24:41 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 [2024-07-26 13:24:41.662146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 
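Collapsed out of the xtrace, each iteration of this second loop (target/rpc.sh@99-107) amounts to the following direct rpc.py calls. The NQN, listener address, and namespace name are exactly as they appear in the trace; note the script itself drives these through the rpc_cmd wrapper rather than invoking rpc.py directly as sketched here.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# create the subsystem, expose it over TCP, attach and detach a namespace,
# then tear the subsystem down again
$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1
$RPC nvmf_subsystem_allow_any_host "$NQN"
$RPC nvmf_subsystem_remove_ns "$NQN" 1
$RPC nvmf_delete_subsystem "$NQN"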
13:24:41 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.229 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.229 13:24:41 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:44.229 13:24:41 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.229 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.229 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 [2024-07-26 13:24:41.718315] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:44.492 13:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.492 13:24:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 13:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.492 13:24:41 -- target/rpc.sh@110 -- # stats='{ 00:13:44.492 "tick_rate": 2400000000, 00:13:44.492 "poll_groups": [ 00:13:44.492 { 00:13:44.492 "name": "nvmf_tgt_poll_group_0", 00:13:44.492 "admin_qpairs": 0, 00:13:44.492 "io_qpairs": 224, 00:13:44.492 "current_admin_qpairs": 0, 00:13:44.492 "current_io_qpairs": 0, 00:13:44.492 "pending_bdev_io": 0, 00:13:44.492 "completed_nvme_io": 227, 00:13:44.492 "transports": [ 00:13:44.492 { 00:13:44.492 "trtype": "TCP" 00:13:44.492 } 00:13:44.492 ] 00:13:44.492 }, 00:13:44.492 { 00:13:44.492 "name": "nvmf_tgt_poll_group_1", 00:13:44.492 "admin_qpairs": 1, 00:13:44.492 "io_qpairs": 223, 00:13:44.492 "current_admin_qpairs": 0, 00:13:44.492 "current_io_qpairs": 0, 00:13:44.492 "pending_bdev_io": 0, 00:13:44.492 "completed_nvme_io": 324, 00:13:44.492 "transports": [ 00:13:44.492 { 00:13:44.492 "trtype": "TCP" 00:13:44.492 } 00:13:44.492 ] 00:13:44.492 }, 00:13:44.492 { 00:13:44.492 "name": "nvmf_tgt_poll_group_2", 00:13:44.492 "admin_qpairs": 6, 00:13:44.492 "io_qpairs": 218, 00:13:44.492 "current_admin_qpairs": 0, 00:13:44.492 "current_io_qpairs": 0, 00:13:44.492 "pending_bdev_io": 0, 00:13:44.492 "completed_nvme_io": 463, 00:13:44.492 "transports": [ 00:13:44.492 { 00:13:44.493 "trtype": "TCP" 00:13:44.493 } 00:13:44.493 ] 00:13:44.493 }, 00:13:44.493 { 00:13:44.493 "name": "nvmf_tgt_poll_group_3", 00:13:44.493 "admin_qpairs": 0, 00:13:44.493 "io_qpairs": 224, 00:13:44.493 "current_admin_qpairs": 0, 00:13:44.493 "current_io_qpairs": 0, 00:13:44.493 "pending_bdev_io": 0, 00:13:44.493 "completed_nvme_io": 225, 00:13:44.493 "transports": [ 00:13:44.493 { 00:13:44.493 "trtype": "TCP" 00:13:44.493 } 00:13:44.493 ] 00:13:44.493 } 00:13:44.493 ] 00:13:44.493 }' 00:13:44.493 13:24:41 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:44.493 13:24:41 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:44.493 13:24:41 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:44.493 13:24:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:44.493 13:24:41 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:44.493 13:24:41 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:44.493 13:24:41 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:44.493 13:24:41 -- target/rpc.sh@123 -- # nvmftestfini 00:13:44.493 13:24:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.493 13:24:41 -- nvmf/common.sh@116 -- # sync 00:13:44.493 13:24:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:44.493 13:24:41 -- nvmf/common.sh@119 -- # set +e 00:13:44.493 13:24:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:44.493 13:24:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:44.493 rmmod nvme_tcp 00:13:44.493 rmmod nvme_fabrics 00:13:44.493 rmmod nvme_keyring 00:13:44.493 13:24:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:44.493 13:24:41 -- nvmf/common.sh@123 -- # set -e 00:13:44.493 13:24:41 -- 
nvmf/common.sh@124 -- # return 0 00:13:44.493 13:24:41 -- nvmf/common.sh@477 -- # '[' -n 860197 ']' 00:13:44.493 13:24:41 -- nvmf/common.sh@478 -- # killprocess 860197 00:13:44.493 13:24:41 -- common/autotest_common.sh@926 -- # '[' -z 860197 ']' 00:13:44.493 13:24:41 -- common/autotest_common.sh@930 -- # kill -0 860197 00:13:44.493 13:24:41 -- common/autotest_common.sh@931 -- # uname 00:13:44.493 13:24:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.493 13:24:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 860197 00:13:44.754 13:24:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.754 13:24:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.754 13:24:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 860197' 00:13:44.754 killing process with pid 860197 00:13:44.754 13:24:42 -- common/autotest_common.sh@945 -- # kill 860197 00:13:44.754 13:24:42 -- common/autotest_common.sh@950 -- # wait 860197 00:13:44.754 13:24:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.754 13:24:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.754 13:24:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.754 13:24:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.754 13:24:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.754 13:24:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.754 13:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.754 13:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.302 13:24:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:47.302 00:13:47.302 real 0m37.585s 00:13:47.302 user 1m53.548s 00:13:47.302 sys 0m7.334s 00:13:47.302 13:24:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.302 13:24:44 -- common/autotest_common.sh@10 -- # set +x 00:13:47.302 ************************************ 00:13:47.302 END TEST nvmf_rpc 00:13:47.302 ************************************ 00:13:47.302 13:24:44 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:47.302 13:24:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:47.302 13:24:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.302 13:24:44 -- common/autotest_common.sh@10 -- # set +x 00:13:47.302 ************************************ 00:13:47.302 START TEST nvmf_invalid 00:13:47.302 ************************************ 00:13:47.302 13:24:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:47.302 * Looking for test storage... 
00:13:47.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.302 13:24:44 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.302 13:24:44 -- nvmf/common.sh@7 -- # uname -s 00:13:47.302 13:24:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.302 13:24:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.302 13:24:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.302 13:24:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.302 13:24:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.302 13:24:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.302 13:24:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.302 13:24:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.302 13:24:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.302 13:24:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.302 13:24:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.302 13:24:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.302 13:24:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.302 13:24:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.302 13:24:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.302 13:24:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.302 13:24:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.302 13:24:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.302 13:24:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.302 13:24:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.302 13:24:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.302 13:24:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.302 13:24:44 -- paths/export.sh@5 -- # export PATH 00:13:47.302 13:24:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.302 13:24:44 -- nvmf/common.sh@46 -- # : 0 00:13:47.302 13:24:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.302 13:24:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.302 13:24:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.302 13:24:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.302 13:24:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.302 13:24:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:47.302 13:24:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.302 13:24:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.302 13:24:44 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:47.302 13:24:44 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.302 13:24:44 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:47.302 13:24:44 -- target/invalid.sh@14 -- # target=foobar 00:13:47.302 13:24:44 -- target/invalid.sh@16 -- # RANDOM=0 00:13:47.302 13:24:44 -- target/invalid.sh@34 -- # nvmftestinit 00:13:47.302 13:24:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.302 13:24:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.302 13:24:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.302 13:24:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.302 13:24:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.302 13:24:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.302 13:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.302 13:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.302 13:24:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:47.302 13:24:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:47.302 13:24:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:47.302 13:24:44 -- common/autotest_common.sh@10 -- # set +x 00:13:53.896 13:24:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.896 13:24:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:53.896 13:24:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:53.896 13:24:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:53.896 13:24:50 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:53.896 13:24:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:53.896 13:24:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:53.896 13:24:50 -- nvmf/common.sh@294 -- # net_devs=() 00:13:53.896 13:24:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:53.896 13:24:50 -- nvmf/common.sh@295 -- # e810=() 00:13:53.896 13:24:50 -- nvmf/common.sh@295 -- # local -ga e810 00:13:53.896 13:24:50 -- nvmf/common.sh@296 -- # x722=() 00:13:53.896 13:24:50 -- nvmf/common.sh@296 -- # local -ga x722 00:13:53.896 13:24:50 -- nvmf/common.sh@297 -- # mlx=() 00:13:53.896 13:24:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:53.896 13:24:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.896 13:24:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:53.896 13:24:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:53.896 13:24:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.896 13:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.896 13:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:53.896 13:24:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.896 13:24:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.896 
13:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.896 13:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.896 13:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.896 13:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.896 13:24:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:53.896 13:24:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.896 13:24:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.896 13:24:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.896 13:24:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.896 13:24:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:53.896 13:24:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:53.896 13:24:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:53.896 13:24:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.896 13:24:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.896 13:24:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.896 13:24:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:53.896 13:24:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.896 13:24:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.896 13:24:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:53.896 13:24:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.896 13:24:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.896 13:24:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:53.896 13:24:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:53.896 13:24:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.896 13:24:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.896 13:24:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.896 13:24:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.896 13:24:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:53.896 13:24:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.896 13:24:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.896 13:24:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.896 13:24:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:53.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:13:53.896 00:13:53.896 --- 10.0.0.2 ping statistics --- 00:13:53.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.896 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:13:53.896 13:24:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:13:53.896 00:13:53.896 --- 10.0.0.1 ping statistics --- 00:13:53.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.896 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:13:53.896 13:24:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.896 13:24:51 -- nvmf/common.sh@410 -- # return 0 00:13:53.896 13:24:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:53.896 13:24:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.896 13:24:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:53.896 13:24:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:53.896 13:24:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.896 13:24:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:53.896 13:24:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:53.896 13:24:51 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:53.896 13:24:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:53.896 13:24:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:53.896 13:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.896 13:24:51 -- nvmf/common.sh@469 -- # nvmfpid=869962 00:13:53.896 13:24:51 -- nvmf/common.sh@470 -- # waitforlisten 869962 00:13:53.896 13:24:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.896 13:24:51 -- common/autotest_common.sh@819 -- # '[' -z 869962 ']' 00:13:53.896 13:24:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.896 13:24:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.896 13:24:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.896 13:24:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.896 13:24:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.896 [2024-07-26 13:24:51.352283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:53.896 [2024-07-26 13:24:51.352356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.157 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.157 [2024-07-26 13:24:51.426256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.157 [2024-07-26 13:24:51.463359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.157 [2024-07-26 13:24:51.463512] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.157 [2024-07-26 13:24:51.463523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.157 [2024-07-26 13:24:51.463531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
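Before the target is started, the trace above builds a small TCP loopback topology between the host and a network namespace. Stripped of the xtrace noise, the setup is roughly the following; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt invocation are taken verbatim from the log, while the exact ordering inside nvmf/common.sh may differ.

# put one port of the NIC pair into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic in and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the nvmf target is then launched inside the namespace
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF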
00:13:54.157 [2024-07-26 13:24:51.463675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.157 [2024-07-26 13:24:51.463790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.157 [2024-07-26 13:24:51.463950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.157 [2024-07-26 13:24:51.463951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.730 13:24:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.730 13:24:52 -- common/autotest_common.sh@852 -- # return 0 00:13:54.730 13:24:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:54.730 13:24:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:54.730 13:24:52 -- common/autotest_common.sh@10 -- # set +x 00:13:54.730 13:24:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.730 13:24:52 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:54.730 13:24:52 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21458 00:13:54.992 [2024-07-26 13:24:52.302786] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:54.992 13:24:52 -- target/invalid.sh@40 -- # out='request: 00:13:54.992 { 00:13:54.992 "nqn": "nqn.2016-06.io.spdk:cnode21458", 00:13:54.992 "tgt_name": "foobar", 00:13:54.992 "method": "nvmf_create_subsystem", 00:13:54.992 "req_id": 1 00:13:54.992 } 00:13:54.992 Got JSON-RPC error response 00:13:54.992 response: 00:13:54.992 { 00:13:54.992 "code": -32603, 00:13:54.992 "message": "Unable to find target foobar" 00:13:54.992 }' 00:13:54.992 13:24:52 -- target/invalid.sh@41 -- # [[ request: 00:13:54.992 { 00:13:54.992 "nqn": "nqn.2016-06.io.spdk:cnode21458", 00:13:54.992 "tgt_name": "foobar", 00:13:54.992 "method": "nvmf_create_subsystem", 00:13:54.992 "req_id": 1 00:13:54.992 } 00:13:54.992 Got JSON-RPC error response 00:13:54.992 response: 00:13:54.992 { 00:13:54.992 "code": -32603, 00:13:54.992 "message": "Unable to find target foobar" 00:13:54.992 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:54.992 13:24:52 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:54.992 13:24:52 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15419 00:13:55.254 [2024-07-26 13:24:52.475405] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15419: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:55.254 13:24:52 -- target/invalid.sh@45 -- # out='request: 00:13:55.254 { 00:13:55.254 "nqn": "nqn.2016-06.io.spdk:cnode15419", 00:13:55.254 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:55.254 "method": "nvmf_create_subsystem", 00:13:55.254 "req_id": 1 00:13:55.254 } 00:13:55.254 Got JSON-RPC error response 00:13:55.254 response: 00:13:55.254 { 00:13:55.254 "code": -32602, 00:13:55.254 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:55.254 }' 00:13:55.254 13:24:52 -- target/invalid.sh@46 -- # [[ request: 00:13:55.254 { 00:13:55.254 "nqn": "nqn.2016-06.io.spdk:cnode15419", 00:13:55.254 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:55.254 "method": "nvmf_create_subsystem", 00:13:55.254 "req_id": 1 00:13:55.254 } 00:13:55.254 Got JSON-RPC error response 00:13:55.254 response: 00:13:55.254 { 
00:13:55.254 "code": -32602, 00:13:55.254 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:55.254 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:55.254 13:24:52 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:55.254 13:24:52 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12509 00:13:55.254 [2024-07-26 13:24:52.647953] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12509: invalid model number 'SPDK_Controller' 00:13:55.254 13:24:52 -- target/invalid.sh@50 -- # out='request: 00:13:55.254 { 00:13:55.254 "nqn": "nqn.2016-06.io.spdk:cnode12509", 00:13:55.254 "model_number": "SPDK_Controller\u001f", 00:13:55.254 "method": "nvmf_create_subsystem", 00:13:55.254 "req_id": 1 00:13:55.254 } 00:13:55.254 Got JSON-RPC error response 00:13:55.254 response: 00:13:55.254 { 00:13:55.254 "code": -32602, 00:13:55.254 "message": "Invalid MN SPDK_Controller\u001f" 00:13:55.254 }' 00:13:55.254 13:24:52 -- target/invalid.sh@51 -- # [[ request: 00:13:55.254 { 00:13:55.254 "nqn": "nqn.2016-06.io.spdk:cnode12509", 00:13:55.254 "model_number": "SPDK_Controller\u001f", 00:13:55.254 "method": "nvmf_create_subsystem", 00:13:55.254 "req_id": 1 00:13:55.254 } 00:13:55.254 Got JSON-RPC error response 00:13:55.254 response: 00:13:55.254 { 00:13:55.254 "code": -32602, 00:13:55.254 "message": "Invalid MN SPDK_Controller\u001f" 00:13:55.254 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:55.254 13:24:52 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:55.254 13:24:52 -- target/invalid.sh@19 -- # local length=21 ll 00:13:55.254 13:24:52 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:55.255 13:24:52 -- target/invalid.sh@21 -- # local chars 00:13:55.255 13:24:52 -- target/invalid.sh@22 -- # local string 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 78 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # string+=N 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 37 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # string+=% 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 71 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # string+=G 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 97 00:13:55.255 13:24:52 -- 
target/invalid.sh@25 -- # echo -e '\x61' 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # string+=a 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 57 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # string+=9 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.255 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # printf %x 100 00:13:55.255 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=d 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 84 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=T 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 40 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='(' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 94 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='^' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 117 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=u 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 101 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=e 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 112 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=p 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 91 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='[' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 76 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=L 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 74 00:13:55.517 13:24:52 -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=J 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 38 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='&' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 77 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=M 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 124 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='|' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 76 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=L 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 40 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+='(' 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # printf %x 90 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:55.517 13:24:52 -- target/invalid.sh@25 -- # string+=Z 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.517 13:24:52 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.517 13:24:52 -- target/invalid.sh@28 -- # [[ N == \- ]] 00:13:55.517 13:24:52 -- target/invalid.sh@31 -- # echo 'N%Ga9dT(^uep[LJ&M|L(Z' 00:13:55.517 13:24:52 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'N%Ga9dT(^uep[LJ&M|L(Z' nqn.2016-06.io.spdk:cnode13551 00:13:55.517 [2024-07-26 13:24:52.977019] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13551: invalid serial number 'N%Ga9dT(^uep[LJ&M|L(Z' 00:13:55.780 13:24:53 -- target/invalid.sh@54 -- # out='request: 00:13:55.780 { 00:13:55.780 "nqn": "nqn.2016-06.io.spdk:cnode13551", 00:13:55.780 "serial_number": "N%Ga9dT(^uep[LJ&M|L(Z", 00:13:55.780 "method": "nvmf_create_subsystem", 00:13:55.780 "req_id": 1 00:13:55.780 } 00:13:55.780 Got JSON-RPC error response 00:13:55.780 response: 00:13:55.780 { 00:13:55.780 "code": -32602, 00:13:55.780 "message": "Invalid SN N%Ga9dT(^uep[LJ&M|L(Z" 00:13:55.780 }' 00:13:55.780 13:24:53 -- target/invalid.sh@55 -- # [[ request: 00:13:55.780 { 00:13:55.780 "nqn": "nqn.2016-06.io.spdk:cnode13551", 00:13:55.780 "serial_number": "N%Ga9dT(^uep[LJ&M|L(Z", 00:13:55.780 "method": "nvmf_create_subsystem", 00:13:55.780 "req_id": 1 00:13:55.780 } 00:13:55.780 Got JSON-RPC error response 00:13:55.780 response: 00:13:55.780 { 00:13:55.780 "code": -32602, 00:13:55.780 
"message": "Invalid SN N%Ga9dT(^uep[LJ&M|L(Z" 00:13:55.780 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:55.780 13:24:53 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:55.780 13:24:53 -- target/invalid.sh@19 -- # local length=41 ll 00:13:55.780 13:24:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:55.780 13:24:53 -- target/invalid.sh@21 -- # local chars 00:13:55.780 13:24:53 -- target/invalid.sh@22 -- # local string 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 108 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=l 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 42 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='*' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 50 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=2 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 32 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=' ' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 58 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=: 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 64 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=@ 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 89 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=Y 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 36 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='$' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 62 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='>' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 111 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=o 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 123 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='{' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 56 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=8 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 40 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='(' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 49 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=1 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 88 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=X 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 121 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=y 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 57 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=9 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 106 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=j 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 36 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+='$' 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- 
# (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 115 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=s 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 116 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # string+=t 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.780 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # printf %x 84 00:13:55.780 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=T 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 81 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=Q 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 112 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=p 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 64 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=@ 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 117 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=u 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 122 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=z 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 55 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=7 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 41 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=')' 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 108 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=l 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 111 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=o 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 40 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+='(' 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.781 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # printf %x 86 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:55.781 13:24:53 -- target/invalid.sh@25 -- # string+=V 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 84 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=T 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 73 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=I 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 110 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=n 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 96 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+='`' 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 78 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=N 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 112 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=p 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 117 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=u 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # printf %x 119 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:56.042 13:24:53 -- target/invalid.sh@25 -- # string+=w 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:56.042 13:24:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:56.042 13:24:53 -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:56.042 13:24:53 -- target/invalid.sh@31 -- # echo 'l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw' 00:13:56.042 13:24:53 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw' nqn.2016-06.io.spdk:cnode27433 00:13:56.042 [2024-07-26 13:24:53.450556] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27433: invalid model number 'l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw' 00:13:56.042 13:24:53 -- target/invalid.sh@58 -- # out='request: 00:13:56.042 { 00:13:56.042 "nqn": "nqn.2016-06.io.spdk:cnode27433", 00:13:56.042 "model_number": "l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw", 00:13:56.042 "method": "nvmf_create_subsystem", 00:13:56.042 "req_id": 1 00:13:56.042 } 00:13:56.042 Got JSON-RPC error response 00:13:56.042 response: 00:13:56.042 { 00:13:56.042 "code": -32602, 00:13:56.042 "message": "Invalid MN l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw" 00:13:56.042 }' 00:13:56.042 13:24:53 -- target/invalid.sh@59 -- # [[ request: 00:13:56.042 { 00:13:56.042 "nqn": "nqn.2016-06.io.spdk:cnode27433", 00:13:56.042 "model_number": "l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw", 00:13:56.042 "method": "nvmf_create_subsystem", 00:13:56.042 "req_id": 1 00:13:56.042 } 00:13:56.042 Got JSON-RPC error response 00:13:56.042 response: 00:13:56.042 { 00:13:56.042 "code": -32602, 00:13:56.042 "message": "Invalid MN l*2 :@Y$>o{8(1Xy9j$stTQp@uz7)lo(VTIn`Npuw" 00:13:56.042 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:56.042 13:24:53 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:56.304 [2024-07-26 13:24:53.619212] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.304 13:24:53 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:56.566 13:24:53 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:56.566 13:24:53 -- target/invalid.sh@67 -- # echo '' 00:13:56.566 13:24:53 -- target/invalid.sh@67 -- # head -n 1 00:13:56.566 13:24:53 -- target/invalid.sh@67 -- # IP= 00:13:56.566 13:24:53 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:56.566 [2024-07-26 13:24:53.960331] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:56.566 13:24:53 -- target/invalid.sh@69 -- # out='request: 00:13:56.566 { 00:13:56.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:56.566 "listen_address": { 00:13:56.566 "trtype": "tcp", 00:13:56.566 "traddr": "", 00:13:56.566 "trsvcid": "4421" 00:13:56.566 }, 00:13:56.566 "method": "nvmf_subsystem_remove_listener", 00:13:56.566 "req_id": 1 00:13:56.566 } 00:13:56.566 Got JSON-RPC error response 00:13:56.566 response: 00:13:56.566 { 00:13:56.566 "code": -32602, 00:13:56.566 "message": "Invalid parameters" 00:13:56.566 }' 00:13:56.566 13:24:53 -- target/invalid.sh@70 -- # [[ request: 00:13:56.566 { 00:13:56.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:56.566 "listen_address": { 00:13:56.566 "trtype": "tcp", 00:13:56.566 "traddr": "", 00:13:56.566 "trsvcid": "4421" 00:13:56.566 }, 00:13:56.566 "method": "nvmf_subsystem_remove_listener", 00:13:56.566 
"req_id": 1 00:13:56.566 } 00:13:56.566 Got JSON-RPC error response 00:13:56.566 response: 00:13:56.566 { 00:13:56.566 "code": -32602, 00:13:56.566 "message": "Invalid parameters" 00:13:56.566 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:56.566 13:24:53 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11615 -i 0 00:13:56.828 [2024-07-26 13:24:54.128819] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11615: invalid cntlid range [0-65519] 00:13:56.828 13:24:54 -- target/invalid.sh@73 -- # out='request: 00:13:56.828 { 00:13:56.828 "nqn": "nqn.2016-06.io.spdk:cnode11615", 00:13:56.828 "min_cntlid": 0, 00:13:56.828 "method": "nvmf_create_subsystem", 00:13:56.828 "req_id": 1 00:13:56.828 } 00:13:56.828 Got JSON-RPC error response 00:13:56.828 response: 00:13:56.828 { 00:13:56.828 "code": -32602, 00:13:56.828 "message": "Invalid cntlid range [0-65519]" 00:13:56.828 }' 00:13:56.828 13:24:54 -- target/invalid.sh@74 -- # [[ request: 00:13:56.828 { 00:13:56.828 "nqn": "nqn.2016-06.io.spdk:cnode11615", 00:13:56.828 "min_cntlid": 0, 00:13:56.828 "method": "nvmf_create_subsystem", 00:13:56.828 "req_id": 1 00:13:56.828 } 00:13:56.828 Got JSON-RPC error response 00:13:56.828 response: 00:13:56.828 { 00:13:56.828 "code": -32602, 00:13:56.828 "message": "Invalid cntlid range [0-65519]" 00:13:56.828 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:56.828 13:24:54 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6847 -i 65520 00:13:56.828 [2024-07-26 13:24:54.289365] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6847: invalid cntlid range [65520-65519] 00:13:57.088 13:24:54 -- target/invalid.sh@75 -- # out='request: 00:13:57.088 { 00:13:57.088 "nqn": "nqn.2016-06.io.spdk:cnode6847", 00:13:57.088 "min_cntlid": 65520, 00:13:57.088 "method": "nvmf_create_subsystem", 00:13:57.088 "req_id": 1 00:13:57.088 } 00:13:57.088 Got JSON-RPC error response 00:13:57.088 response: 00:13:57.088 { 00:13:57.088 "code": -32602, 00:13:57.088 "message": "Invalid cntlid range [65520-65519]" 00:13:57.088 }' 00:13:57.088 13:24:54 -- target/invalid.sh@76 -- # [[ request: 00:13:57.088 { 00:13:57.088 "nqn": "nqn.2016-06.io.spdk:cnode6847", 00:13:57.088 "min_cntlid": 65520, 00:13:57.088 "method": "nvmf_create_subsystem", 00:13:57.088 "req_id": 1 00:13:57.088 } 00:13:57.088 Got JSON-RPC error response 00:13:57.088 response: 00:13:57.088 { 00:13:57.088 "code": -32602, 00:13:57.088 "message": "Invalid cntlid range [65520-65519]" 00:13:57.088 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:57.088 13:24:54 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29412 -I 0 00:13:57.088 [2024-07-26 13:24:54.457906] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29412: invalid cntlid range [1-0] 00:13:57.088 13:24:54 -- target/invalid.sh@77 -- # out='request: 00:13:57.088 { 00:13:57.088 "nqn": "nqn.2016-06.io.spdk:cnode29412", 00:13:57.088 "max_cntlid": 0, 00:13:57.088 "method": "nvmf_create_subsystem", 00:13:57.088 "req_id": 1 00:13:57.088 } 00:13:57.088 Got JSON-RPC error response 00:13:57.088 response: 00:13:57.088 { 00:13:57.088 "code": -32602, 00:13:57.088 "message": "Invalid cntlid range [1-0]" 00:13:57.088 }' 
00:13:57.088 13:24:54 -- target/invalid.sh@78 -- # [[ request: 00:13:57.088 { 00:13:57.088 "nqn": "nqn.2016-06.io.spdk:cnode29412", 00:13:57.088 "max_cntlid": 0, 00:13:57.088 "method": "nvmf_create_subsystem", 00:13:57.088 "req_id": 1 00:13:57.088 } 00:13:57.088 Got JSON-RPC error response 00:13:57.088 response: 00:13:57.088 { 00:13:57.088 "code": -32602, 00:13:57.088 "message": "Invalid cntlid range [1-0]" 00:13:57.088 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:57.088 13:24:54 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31863 -I 65520 00:13:57.349 [2024-07-26 13:24:54.618431] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31863: invalid cntlid range [1-65520] 00:13:57.349 13:24:54 -- target/invalid.sh@79 -- # out='request: 00:13:57.349 { 00:13:57.349 "nqn": "nqn.2016-06.io.spdk:cnode31863", 00:13:57.349 "max_cntlid": 65520, 00:13:57.349 "method": "nvmf_create_subsystem", 00:13:57.349 "req_id": 1 00:13:57.349 } 00:13:57.349 Got JSON-RPC error response 00:13:57.349 response: 00:13:57.349 { 00:13:57.349 "code": -32602, 00:13:57.349 "message": "Invalid cntlid range [1-65520]" 00:13:57.349 }' 00:13:57.349 13:24:54 -- target/invalid.sh@80 -- # [[ request: 00:13:57.349 { 00:13:57.349 "nqn": "nqn.2016-06.io.spdk:cnode31863", 00:13:57.349 "max_cntlid": 65520, 00:13:57.349 "method": "nvmf_create_subsystem", 00:13:57.349 "req_id": 1 00:13:57.349 } 00:13:57.349 Got JSON-RPC error response 00:13:57.349 response: 00:13:57.349 { 00:13:57.349 "code": -32602, 00:13:57.349 "message": "Invalid cntlid range [1-65520]" 00:13:57.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:57.349 13:24:54 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31012 -i 6 -I 5 00:13:57.349 [2024-07-26 13:24:54.786991] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31012: invalid cntlid range [6-5] 00:13:57.349 13:24:54 -- target/invalid.sh@83 -- # out='request: 00:13:57.349 { 00:13:57.349 "nqn": "nqn.2016-06.io.spdk:cnode31012", 00:13:57.349 "min_cntlid": 6, 00:13:57.349 "max_cntlid": 5, 00:13:57.349 "method": "nvmf_create_subsystem", 00:13:57.349 "req_id": 1 00:13:57.349 } 00:13:57.349 Got JSON-RPC error response 00:13:57.349 response: 00:13:57.349 { 00:13:57.349 "code": -32602, 00:13:57.349 "message": "Invalid cntlid range [6-5]" 00:13:57.349 }' 00:13:57.349 13:24:54 -- target/invalid.sh@84 -- # [[ request: 00:13:57.349 { 00:13:57.349 "nqn": "nqn.2016-06.io.spdk:cnode31012", 00:13:57.349 "min_cntlid": 6, 00:13:57.349 "max_cntlid": 5, 00:13:57.349 "method": "nvmf_create_subsystem", 00:13:57.349 "req_id": 1 00:13:57.349 } 00:13:57.349 Got JSON-RPC error response 00:13:57.349 response: 00:13:57.349 { 00:13:57.350 "code": -32602, 00:13:57.350 "message": "Invalid cntlid range [6-5]" 00:13:57.350 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:57.350 13:24:54 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:57.611 13:24:54 -- target/invalid.sh@87 -- # out='request: 00:13:57.611 { 00:13:57.611 "name": "foobar", 00:13:57.611 "method": "nvmf_delete_target", 00:13:57.611 "req_id": 1 00:13:57.611 } 00:13:57.611 Got JSON-RPC error response 00:13:57.611 response: 00:13:57.611 { 00:13:57.611 "code": -32602, 00:13:57.611 "message": "The 
specified target doesn'\''t exist, cannot delete it." 00:13:57.611 }' 00:13:57.611 13:24:54 -- target/invalid.sh@88 -- # [[ request: 00:13:57.611 { 00:13:57.611 "name": "foobar", 00:13:57.611 "method": "nvmf_delete_target", 00:13:57.611 "req_id": 1 00:13:57.611 } 00:13:57.611 Got JSON-RPC error response 00:13:57.611 response: 00:13:57.611 { 00:13:57.611 "code": -32602, 00:13:57.611 "message": "The specified target doesn't exist, cannot delete it." 00:13:57.611 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:57.611 13:24:54 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:57.611 13:24:54 -- target/invalid.sh@91 -- # nvmftestfini 00:13:57.611 13:24:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:57.611 13:24:54 -- nvmf/common.sh@116 -- # sync 00:13:57.611 13:24:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:57.611 13:24:54 -- nvmf/common.sh@119 -- # set +e 00:13:57.611 13:24:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:57.611 13:24:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:57.611 rmmod nvme_tcp 00:13:57.611 rmmod nvme_fabrics 00:13:57.611 rmmod nvme_keyring 00:13:57.611 13:24:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:57.611 13:24:54 -- nvmf/common.sh@123 -- # set -e 00:13:57.611 13:24:54 -- nvmf/common.sh@124 -- # return 0 00:13:57.611 13:24:54 -- nvmf/common.sh@477 -- # '[' -n 869962 ']' 00:13:57.611 13:24:54 -- nvmf/common.sh@478 -- # killprocess 869962 00:13:57.611 13:24:54 -- common/autotest_common.sh@926 -- # '[' -z 869962 ']' 00:13:57.611 13:24:54 -- common/autotest_common.sh@930 -- # kill -0 869962 00:13:57.611 13:24:54 -- common/autotest_common.sh@931 -- # uname 00:13:57.611 13:24:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:57.611 13:24:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 869962 00:13:57.611 13:24:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:57.611 13:24:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:57.611 13:24:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 869962' 00:13:57.611 killing process with pid 869962 00:13:57.611 13:24:55 -- common/autotest_common.sh@945 -- # kill 869962 00:13:57.611 13:24:55 -- common/autotest_common.sh@950 -- # wait 869962 00:13:57.872 13:24:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:57.872 13:24:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:57.872 13:24:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:57.872 13:24:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.872 13:24:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:57.872 13:24:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.872 13:24:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.872 13:24:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.878 13:24:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:59.878 00:13:59.878 real 0m12.991s 00:13:59.878 user 0m18.806s 00:13:59.878 sys 0m6.052s 00:13:59.878 13:24:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.878 13:24:57 -- common/autotest_common.sh@10 -- # set +x 00:13:59.878 ************************************ 00:13:59.878 END TEST nvmf_invalid 00:13:59.878 ************************************ 00:13:59.878 13:24:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:59.878 13:24:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:59.878 13:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.878 13:24:57 -- common/autotest_common.sh@10 -- # set +x 00:13:59.878 ************************************ 00:13:59.878 START TEST nvmf_abort 00:13:59.878 ************************************ 00:13:59.878 13:24:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:00.138 * Looking for test storage... 00:14:00.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.138 13:24:57 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.138 13:24:57 -- nvmf/common.sh@7 -- # uname -s 00:14:00.138 13:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.138 13:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.138 13:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.138 13:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.138 13:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.138 13:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.138 13:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.138 13:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.138 13:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.138 13:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.138 13:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.138 13:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:00.138 13:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.138 13:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.138 13:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.138 13:24:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.138 13:24:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.138 13:24:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.138 13:24:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.138 13:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.138 13:24:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.139 13:24:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.139 13:24:57 -- paths/export.sh@5 -- # export PATH 00:14:00.139 13:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.139 13:24:57 -- nvmf/common.sh@46 -- # : 0 00:14:00.139 13:24:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.139 13:24:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.139 13:24:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.139 13:24:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.139 13:24:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.139 13:24:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:00.139 13:24:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.139 13:24:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.139 13:24:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.139 13:24:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:00.139 13:24:57 -- target/abort.sh@14 -- # nvmftestinit 00:14:00.139 13:24:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.139 13:24:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.139 13:24:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.139 13:24:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.139 13:24:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.139 13:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.139 13:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.139 13:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.139 13:24:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:00.139 13:24:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:00.139 13:24:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:00.139 13:24:57 -- common/autotest_common.sh@10 -- # set +x 00:14:06.774 13:25:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
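The trace above covers abort.sh's preamble: it sources test/nvmf/common.sh, sets the Malloc parameters, and calls nvmftestinit, which then starts scanning for supported NICs. A condensed view of that preamble, a sketch rather than the full script, with values taken straight from the trace:

  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  MALLOC_BDEV_SIZE=64       # passed later to bdev_malloc_create together with the block size
  MALLOC_BLOCK_SIZE=4096
  nvmftestinit              # with --transport=tcp: prepares net devices, then runs nvmf_tcp_init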
00:14:06.774 13:25:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:06.774 13:25:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:06.774 13:25:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:06.774 13:25:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:06.774 13:25:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:06.774 13:25:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:06.774 13:25:04 -- nvmf/common.sh@294 -- # net_devs=() 00:14:06.774 13:25:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:06.774 13:25:04 -- nvmf/common.sh@295 -- # e810=() 00:14:06.774 13:25:04 -- nvmf/common.sh@295 -- # local -ga e810 00:14:06.774 13:25:04 -- nvmf/common.sh@296 -- # x722=() 00:14:06.774 13:25:04 -- nvmf/common.sh@296 -- # local -ga x722 00:14:06.774 13:25:04 -- nvmf/common.sh@297 -- # mlx=() 00:14:06.774 13:25:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:06.774 13:25:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.774 13:25:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:06.774 13:25:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:06.774 13:25:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:06.774 13:25:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.774 13:25:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:06.774 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:06.774 13:25:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.774 13:25:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:06.774 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:06.774 13:25:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
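gather_supported_nvmf_pci_devs builds allow-lists of Intel E810/X722 and Mellanox device IDs and walks the PCI bus; here it finds the two 0x159b (E810, ice driver) ports at 0000:4b:00.0 and 0000:4b:00.1. To confirm the same hardware outside the harness, a plain lspci query by vendor:device ID performs the equivalent lookup (assumes pciutils is installed):

  lspci -d 8086:159b    # should list the two E810 functions found above, 0000:4b:00.0 and 0000:4b:00.1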
00:14:06.774 13:25:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.774 13:25:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.774 13:25:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.774 13:25:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.774 13:25:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:06.774 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:06.774 13:25:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.774 13:25:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.774 13:25:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.774 13:25:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.774 13:25:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.774 13:25:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:06.774 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:06.774 13:25:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.774 13:25:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:06.774 13:25:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:06.774 13:25:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:06.774 13:25:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:06.774 13:25:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.774 13:25:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.774 13:25:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.774 13:25:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:06.775 13:25:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.775 13:25:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.775 13:25:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:06.775 13:25:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.775 13:25:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.775 13:25:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:06.775 13:25:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:06.775 13:25:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.775 13:25:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.775 13:25:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.775 13:25:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.036 13:25:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:07.036 13:25:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.036 13:25:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.036 13:25:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.036 13:25:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:07.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:14:07.036 00:14:07.036 --- 10.0.0.2 ping statistics --- 00:14:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.036 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:14:07.036 13:25:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:14:07.036 00:14:07.036 --- 10.0.0.1 ping statistics --- 00:14:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.036 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:14:07.036 13:25:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.036 13:25:04 -- nvmf/common.sh@410 -- # return 0 00:14:07.036 13:25:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:07.036 13:25:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.036 13:25:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:07.036 13:25:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:07.036 13:25:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.036 13:25:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:07.036 13:25:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:07.036 13:25:04 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:07.036 13:25:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:07.036 13:25:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:07.036 13:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:07.036 13:25:04 -- nvmf/common.sh@469 -- # nvmfpid=875003 00:14:07.036 13:25:04 -- nvmf/common.sh@470 -- # waitforlisten 875003 00:14:07.036 13:25:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.036 13:25:04 -- common/autotest_common.sh@819 -- # '[' -z 875003 ']' 00:14:07.036 13:25:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.036 13:25:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:07.036 13:25:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.036 13:25:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:07.036 13:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:07.036 [2024-07-26 13:25:04.508004] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:07.036 [2024-07-26 13:25:04.508069] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.298 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.298 [2024-07-26 13:25:04.597349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.298 [2024-07-26 13:25:04.642297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:07.298 [2024-07-26 13:25:04.642473] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.298 [2024-07-26 13:25:04.642486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:07.298 [2024-07-26 13:25:04.642496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.298 [2024-07-26 13:25:04.642626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.298 [2024-07-26 13:25:04.642795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.298 [2024-07-26 13:25:04.642796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.870 13:25:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.870 13:25:05 -- common/autotest_common.sh@852 -- # return 0 00:14:07.870 13:25:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.870 13:25:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.870 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:07.870 13:25:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.870 13:25:05 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:07.870 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.870 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:07.870 [2024-07-26 13:25:05.314092] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.870 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.870 13:25:05 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:07.870 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.870 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 Malloc0 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.131 13:25:05 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:08.131 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.131 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 Delay0 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.131 13:25:05 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:08.131 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.131 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.131 13:25:05 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:08.131 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.131 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.131 13:25:05 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:08.131 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.131 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 [2024-07-26 13:25:05.399825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.131 13:25:05 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.131 13:25:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.131 13:25:05 -- common/autotest_common.sh@10 -- # set +x 00:14:08.131 13:25:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:14:08.131 13:25:05 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:08.131 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.132 [2024-07-26 13:25:05.562349] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:10.681 Initializing NVMe Controllers 00:14:10.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:10.681 controller IO queue size 128 less than required 00:14:10.681 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:10.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:10.681 Initialization complete. Launching workers. 00:14:10.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 27266 00:14:10.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27330, failed to submit 62 00:14:10.681 success 27266, unsuccess 64, failed 0 00:14:10.681 13:25:07 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:10.681 13:25:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.681 13:25:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.681 13:25:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.681 13:25:07 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:10.681 13:25:07 -- target/abort.sh@38 -- # nvmftestfini 00:14:10.681 13:25:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.681 13:25:07 -- nvmf/common.sh@116 -- # sync 00:14:10.681 13:25:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.681 13:25:07 -- nvmf/common.sh@119 -- # set +e 00:14:10.681 13:25:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.681 13:25:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.681 rmmod nvme_tcp 00:14:10.681 rmmod nvme_fabrics 00:14:10.681 rmmod nvme_keyring 00:14:10.681 13:25:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.681 13:25:07 -- nvmf/common.sh@123 -- # set -e 00:14:10.681 13:25:07 -- nvmf/common.sh@124 -- # return 0 00:14:10.681 13:25:07 -- nvmf/common.sh@477 -- # '[' -n 875003 ']' 00:14:10.681 13:25:07 -- nvmf/common.sh@478 -- # killprocess 875003 00:14:10.681 13:25:07 -- common/autotest_common.sh@926 -- # '[' -z 875003 ']' 00:14:10.681 13:25:07 -- common/autotest_common.sh@930 -- # kill -0 875003 00:14:10.681 13:25:07 -- common/autotest_common.sh@931 -- # uname 00:14:10.681 13:25:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.681 13:25:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 875003 00:14:10.681 13:25:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:10.681 13:25:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:10.681 13:25:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 875003' 00:14:10.681 killing process with pid 875003 00:14:10.681 13:25:07 -- common/autotest_common.sh@945 -- # kill 875003 00:14:10.681 13:25:07 -- common/autotest_common.sh@950 -- # wait 875003 00:14:10.681 13:25:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.681 13:25:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.681 13:25:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.681 13:25:07 -- nvmf/common.sh@273 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.681 13:25:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.681 13:25:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.681 13:25:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.681 13:25:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.597 13:25:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:12.597 00:14:12.597 real 0m12.695s 00:14:12.597 user 0m13.501s 00:14:12.597 sys 0m6.145s 00:14:12.597 13:25:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.597 13:25:09 -- common/autotest_common.sh@10 -- # set +x 00:14:12.597 ************************************ 00:14:12.597 END TEST nvmf_abort 00:14:12.597 ************************************ 00:14:12.597 13:25:10 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:12.597 13:25:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:12.597 13:25:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:12.597 13:25:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.597 ************************************ 00:14:12.597 START TEST nvmf_ns_hotplug_stress 00:14:12.597 ************************************ 00:14:12.597 13:25:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:12.859 * Looking for test storage... 00:14:12.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.859 13:25:10 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.859 13:25:10 -- nvmf/common.sh@7 -- # uname -s 00:14:12.859 13:25:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.859 13:25:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.859 13:25:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.859 13:25:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.859 13:25:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.859 13:25:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.859 13:25:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.859 13:25:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.859 13:25:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.859 13:25:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.859 13:25:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.859 13:25:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.859 13:25:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.859 13:25:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.859 13:25:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.859 13:25:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.859 13:25:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.859 13:25:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.859 13:25:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.859 13:25:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.859 13:25:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.859 13:25:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.859 13:25:10 -- paths/export.sh@5 -- # export PATH 00:14:12.860 13:25:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.860 13:25:10 -- nvmf/common.sh@46 -- # : 0 00:14:12.860 13:25:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.860 13:25:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.860 13:25:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.860 13:25:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.860 13:25:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.860 13:25:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:12.860 13:25:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.860 13:25:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.860 13:25:10 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.860 13:25:10 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:12.860 13:25:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:12.860 13:25:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.860 13:25:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.860 13:25:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.860 13:25:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.860 13:25:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:12.860 13:25:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.860 13:25:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.860 13:25:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:12.860 13:25:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:12.860 13:25:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:12.860 13:25:10 -- common/autotest_common.sh@10 -- # set +x 00:14:21.007 13:25:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.007 13:25:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:21.007 13:25:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:21.007 13:25:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:21.007 13:25:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:21.007 13:25:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:21.007 13:25:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:21.007 13:25:16 -- nvmf/common.sh@294 -- # net_devs=() 00:14:21.007 13:25:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:21.007 13:25:16 -- nvmf/common.sh@295 -- # e810=() 00:14:21.007 13:25:16 -- nvmf/common.sh@295 -- # local -ga e810 00:14:21.007 13:25:16 -- nvmf/common.sh@296 -- # x722=() 00:14:21.007 13:25:16 -- nvmf/common.sh@296 -- # local -ga x722 00:14:21.007 13:25:16 -- nvmf/common.sh@297 -- # mlx=() 00:14:21.007 13:25:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:21.007 13:25:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.007 13:25:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.007 13:25:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.007 13:25:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.007 13:25:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.007 13:25:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.007 13:25:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.007 13:25:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:21.007 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:21.007 13:25:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:21.007 13:25:17 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:21.007 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:21.007 13:25:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.007 13:25:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.007 13:25:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.007 13:25:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:21.007 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:21.007 13:25:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:21.007 13:25:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.007 13:25:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.007 13:25:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:21.007 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:21.007 13:25:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:21.007 13:25:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:21.007 13:25:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.007 13:25:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.007 13:25:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:21.007 13:25:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.007 13:25:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.007 13:25:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:21.007 13:25:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.007 13:25:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.007 13:25:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:21.007 13:25:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:21.007 13:25:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.007 13:25:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.007 13:25:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.007 13:25:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.007 13:25:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:21.007 13:25:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:14:21.007 13:25:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.007 13:25:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.007 13:25:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:21.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:14:21.007 00:14:21.007 --- 10.0.0.2 ping statistics --- 00:14:21.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.007 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:14:21.007 13:25:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.479 ms 00:14:21.007 00:14:21.007 --- 10.0.0.1 ping statistics --- 00:14:21.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.007 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:14:21.007 13:25:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.007 13:25:17 -- nvmf/common.sh@410 -- # return 0 00:14:21.007 13:25:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.007 13:25:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.007 13:25:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.007 13:25:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.007 13:25:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.007 13:25:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.007 13:25:17 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:21.007 13:25:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.007 13:25:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.007 13:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:21.007 13:25:17 -- nvmf/common.sh@469 -- # nvmfpid=879856 00:14:21.007 13:25:17 -- nvmf/common.sh@470 -- # waitforlisten 879856 00:14:21.007 13:25:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:21.007 13:25:17 -- common/autotest_common.sh@819 -- # '[' -z 879856 ']' 00:14:21.007 13:25:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.008 13:25:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.008 13:25:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.008 13:25:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.008 13:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:21.008 [2024-07-26 13:25:17.458800] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
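The trace above is the nvmf_tcp_init step of this run: one port of the pair (cvl_0_0) is moved into a private network namespace, the initiator side keeps cvl_0_1 with 10.0.0.1/24, the namespaced target port gets 10.0.0.2/24, TCP port 4420 is opened, both directions are ping-checked, and nvmf_tgt is then started inside that namespace. Below is a condensed sketch of the same sequence, not the test scripts themselves; the interface names, namespace name and binary path are the ones from this host, and the helper variables are shorthand introduced here.

    # Shell sketch of the namespace/TCP bring-up traced above (run as root).
    NS=cvl_0_0_ns_spdk      # target-side namespace used in this run
    TGT_IF=cvl_0_0          # becomes the target port, 10.0.0.2
    INI_IF=cvl_0_1          # stays in the default namespace, 10.0.0.1

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open NVMe/TCP traffic and verify reachability before starting the target.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # Every rpc.py call later in the log talks to this target inside the namespace.
    ip netns exec "$NS" \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &

The two successful ping replies in the trace confirm the ports can reach each other across the namespace boundary before any NVMe traffic starts.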
00:14:21.008 [2024-07-26 13:25:17.458870] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.008 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.008 [2024-07-26 13:25:17.546581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.008 [2024-07-26 13:25:17.593370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.008 [2024-07-26 13:25:17.593524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.008 [2024-07-26 13:25:17.593536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.008 [2024-07-26 13:25:17.593547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.008 [2024-07-26 13:25:17.593693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.008 [2024-07-26 13:25:17.593862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.008 [2024-07-26 13:25:17.593863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.008 13:25:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:21.008 13:25:18 -- common/autotest_common.sh@852 -- # return 0 00:14:21.008 13:25:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:21.008 13:25:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:21.008 13:25:18 -- common/autotest_common.sh@10 -- # set +x 00:14:21.008 13:25:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.008 13:25:18 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:21.008 13:25:18 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.008 [2024-07-26 13:25:18.404797] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.008 13:25:18 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.269 13:25:18 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.269 [2024-07-26 13:25:18.738235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.529 13:25:18 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.529 13:25:18 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:21.790 Malloc0 00:14:21.790 13:25:19 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:21.790 Delay0 00:14:22.051 13:25:19 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.051 13:25:19 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:22.312 NULL1 00:14:22.312 13:25:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:22.312 13:25:19 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=880422 00:14:22.312 13:25:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:22.312 13:25:19 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:22.312 13:25:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.608 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.608 13:25:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.872 13:25:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:22.872 13:25:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:22.872 [2024-07-26 13:25:20.243888] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:14:22.872 true 00:14:22.872 13:25:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:22.872 13:25:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.133 13:25:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.133 13:25:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:23.133 13:25:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:23.394 true 00:14:23.394 13:25:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:23.394 13:25:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.655 13:25:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.655 13:25:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:23.655 13:25:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:23.915 true 00:14:23.915 13:25:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:23.915 13:25:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.174 13:25:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.174 13:25:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:24.174 13:25:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:24.435 true 00:14:24.435 13:25:21 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:24.435 13:25:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.435 13:25:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.695 13:25:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:24.695 13:25:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:24.954 true 00:14:24.954 13:25:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:24.954 13:25:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.954 13:25:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.215 13:25:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:25.215 13:25:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:25.476 true 00:14:25.476 13:25:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:25.476 13:25:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.476 13:25:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.737 13:25:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:25.737 13:25:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:25.737 true 00:14:25.996 13:25:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:25.996 13:25:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.996 13:25:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.256 13:25:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:26.256 13:25:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:26.256 true 00:14:26.256 13:25:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:26.256 13:25:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.515 13:25:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.775 13:25:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:26.775 13:25:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:26.775 true 00:14:26.775 13:25:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:26.775 13:25:24 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.037 13:25:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.298 13:25:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:27.298 13:25:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:27.298 true 00:14:27.298 13:25:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:27.298 13:25:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.559 13:25:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.559 13:25:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:27.559 13:25:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:27.820 true 00:14:27.820 13:25:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:27.820 13:25:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.820 13:25:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.081 13:25:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:28.081 13:25:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:28.341 true 00:14:28.341 13:25:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:28.341 13:25:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.341 13:25:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.601 13:25:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:28.601 13:25:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:28.862 true 00:14:28.862 13:25:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:28.862 13:25:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.862 13:25:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.124 13:25:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:29.124 13:25:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:29.124 true 00:14:29.386 13:25:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:29.386 13:25:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:29.386 13:25:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.647 13:25:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:29.647 13:25:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:29.647 true 00:14:29.647 13:25:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:29.647 13:25:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.907 13:25:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.168 13:25:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:30.168 13:25:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:30.168 true 00:14:30.168 13:25:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:30.168 13:25:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.429 13:25:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.691 13:25:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:30.691 13:25:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:30.691 true 00:14:30.691 13:25:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:30.691 13:25:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.952 13:25:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.952 13:25:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:30.952 13:25:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:31.213 true 00:14:31.213 13:25:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:31.213 13:25:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.475 13:25:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.475 13:25:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:31.475 13:25:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:31.736 true 00:14:31.736 13:25:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:31.736 13:25:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.998 13:25:29 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.998 13:25:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:31.998 13:25:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:32.259 true 00:14:32.259 13:25:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:32.260 13:25:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.521 13:25:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.521 13:25:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:32.521 13:25:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:32.782 true 00:14:32.782 13:25:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:32.782 13:25:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.782 13:25:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.043 13:25:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:33.043 13:25:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:33.043 true 00:14:33.305 13:25:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:33.305 13:25:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.305 13:25:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.566 13:25:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:33.567 13:25:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:33.567 true 00:14:33.567 13:25:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:33.567 13:25:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.827 13:25:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.088 13:25:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:34.088 13:25:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:34.088 true 00:14:34.088 13:25:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:34.088 13:25:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.349 13:25:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
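Before the resize cycle above started, the target was configured entirely through rpc.py: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and a 10-namespace limit, data and discovery listeners on 10.0.0.2:4420, and the backing bdevs Malloc0, Delay0 and NULL1. A condensed sketch of that sequence follows, with $rpc used here only as shorthand for the scripts/rpc.py path seen in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Transport and subsystem (allow any host, fixed serial, max 10 namespaces).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Backing bdevs: a 32 MB malloc disk wrapped by a delay bdev, plus a null bdev.
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes namespace 1
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes namespace 2

The spdk_nvme_perf initiator then connects to 10.0.0.2:4420 and the hot-plug loop traced here runs against exactly this configuration.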
00:14:34.349 13:25:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:34.349 13:25:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:34.610 true 00:14:34.610 13:25:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:34.610 13:25:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.610 13:25:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.871 13:25:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:34.871 13:25:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:35.132 true 00:14:35.132 13:25:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:35.132 13:25:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.132 13:25:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.393 13:25:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:35.393 13:25:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:35.393 true 00:14:35.393 13:25:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:35.654 13:25:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.654 13:25:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.914 13:25:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:35.914 13:25:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:35.914 true 00:14:35.914 13:25:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:35.914 13:25:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.174 13:25:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.435 13:25:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:36.435 13:25:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:36.435 true 00:14:36.435 13:25:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:36.435 13:25:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.696 13:25:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.696 13:25:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:36.696 13:25:34 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:36.958 true 00:14:36.958 13:25:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:36.958 13:25:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.220 13:25:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.220 13:25:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:37.220 13:25:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:37.481 true 00:14:37.481 13:25:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:37.481 13:25:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.742 13:25:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.742 13:25:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:37.742 13:25:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:38.003 true 00:14:38.003 13:25:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:38.003 13:25:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.003 13:25:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.264 13:25:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:38.264 13:25:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:38.525 true 00:14:38.525 13:25:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:38.525 13:25:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.526 13:25:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.787 13:25:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:38.787 13:25:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:38.787 true 00:14:38.787 13:25:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:38.787 13:25:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.048 13:25:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.309 13:25:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:39.309 13:25:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1035 00:14:39.309 true 00:14:39.309 13:25:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:39.309 13:25:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.569 13:25:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.830 13:25:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:39.830 13:25:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:39.830 true 00:14:39.830 13:25:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:39.830 13:25:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.113 13:25:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.113 13:25:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:40.113 13:25:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:40.391 true 00:14:40.391 13:25:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:40.391 13:25:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.652 13:25:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.652 13:25:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:40.652 13:25:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:40.913 true 00:14:40.913 13:25:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:40.913 13:25:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.913 13:25:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.177 13:25:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:41.177 13:25:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:41.438 true 00:14:41.438 13:25:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:41.438 13:25:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.438 13:25:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.699 13:25:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:41.699 13:25:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:41.699 true 00:14:41.959 13:25:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:41.959 13:25:39 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.960 13:25:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.220 13:25:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:42.220 13:25:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:42.220 true 00:14:42.220 13:25:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:42.220 13:25:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.482 13:25:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.743 13:25:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:42.743 13:25:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:42.743 true 00:14:42.743 13:25:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:42.743 13:25:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.004 13:25:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.264 13:25:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:43.264 13:25:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:43.264 true 00:14:43.264 13:25:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:43.264 13:25:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.525 13:25:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.525 13:25:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:43.525 13:25:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:43.785 true 00:14:43.785 13:25:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:43.785 13:25:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.045 13:25:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.045 13:25:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:44.045 13:25:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:44.306 true 00:14:44.306 13:25:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:44.306 13:25:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.567 13:25:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.567 13:25:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:44.567 13:25:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:44.828 true 00:14:44.828 13:25:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:44.828 13:25:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.828 13:25:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.089 13:25:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:45.089 13:25:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:45.350 true 00:14:45.350 13:25:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:45.350 13:25:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.350 13:25:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.617 13:25:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:45.617 13:25:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:45.617 true 00:14:45.884 13:25:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:45.884 13:25:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.884 13:25:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.145 13:25:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:46.145 13:25:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:46.145 true 00:14:46.145 13:25:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:46.145 13:25:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.405 13:25:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.683 13:25:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:46.683 13:25:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:46.683 true 00:14:46.683 13:25:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:46.683 13:25:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.944 13:25:44 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.205 13:25:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:47.205 13:25:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:47.205 true 00:14:47.205 13:25:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:47.205 13:25:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.466 13:25:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.466 13:25:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:47.466 13:25:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:47.728 true 00:14:47.728 13:25:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:47.728 13:25:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.989 13:25:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.989 13:25:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:47.989 13:25:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:48.250 true 00:14:48.250 13:25:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:48.250 13:25:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.511 13:25:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.511 13:25:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:48.511 13:25:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:48.772 true 00:14:48.772 13:25:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:48.772 13:25:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.772 13:25:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.033 13:25:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:49.033 13:25:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:49.294 true 00:14:49.295 13:25:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:49.295 13:25:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.295 13:25:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
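The iterations that repeat through this stretch of the log (null_size 1001, 1002, ...) are all the same cycle from ns_hotplug_stress.sh: as long as the spdk_nvme_perf process started earlier (PID 880422 in this run) is still alive, namespace 1 is detached, Delay0 is re-attached, and NULL1 is resized one step larger; the bare "true" lines are simply the printed result of each bdev_null_resize call. A minimal sketch of that loop, reconstructed from the trace and reusing the $rpc shorthand from above, with $perf_pid holding the perf process ID:

    null_size=1000
    while kill -0 "$perf_pid"; do              # run until spdk_nvme_perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$(( null_size + 1 ))         # 1001, 1002, ... in the trace
        $rpc bdev_null_resize NULL1 "$null_size"
    done

The loop only ends when kill -0 fails, which is visible further down where the perf process has exited and the script prints "No such process".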
00:14:49.556 13:25:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:49.556 13:25:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:49.556 true 00:14:49.817 13:25:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:49.817 13:25:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.817 13:25:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.079 13:25:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:50.079 13:25:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:50.079 true 00:14:50.079 13:25:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:50.079 13:25:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.341 13:25:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.603 13:25:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:50.604 13:25:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:50.604 true 00:14:50.604 13:25:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:50.604 13:25:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.865 13:25:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.865 13:25:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:50.865 13:25:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:51.126 true 00:14:51.126 13:25:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:51.126 13:25:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.388 13:25:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.388 13:25:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:51.388 13:25:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:51.649 true 00:14:51.649 13:25:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:51.649 13:25:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.911 13:25:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.911 13:25:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:51.911 13:25:49 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:52.173 true 00:14:52.173 13:25:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:52.173 13:25:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.173 13:25:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.434 13:25:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:14:52.434 13:25:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:14:52.696 true 00:14:52.696 13:25:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:52.696 13:25:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.696 Initializing NVMe Controllers 00:14:52.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.696 Controller IO queue size 128, less than required. 00:14:52.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:52.696 Initialization complete. Launching workers. 00:14:52.696 ======================================================== 00:14:52.696 Latency(us) 00:14:52.696 Device Information : IOPS MiB/s Average min max 00:14:52.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33875.59 16.54 3778.41 1930.90 11899.18 00:14:52.696 ======================================================== 00:14:52.696 Total : 33875.59 16.54 3778.41 1930.90 11899.18 00:14:52.696 00:14:52.696 13:25:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.956 13:25:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:14:52.957 13:25:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:14:53.218 true 00:14:53.218 13:25:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 880422 00:14:53.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (880422) - No such process 00:14:53.218 13:25:50 -- target/ns_hotplug_stress.sh@53 -- # wait 880422 00:14:53.218 13:25:50 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.218 13:25:50 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:53.479 null0 
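The latency summary above was produced by the spdk_nvme_perf initiator launched just before the hot-plug loop; its invocation in this run was 30 seconds of 512-byte random reads at queue depth 128 against 10.0.0.2:4420, repeated here only for reference:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000

Perf attached to NSID 2 (the NULL1 bdev), so the workload ran while that namespace's backing bdev was being resized and while namespace 1 was repeatedly detached and re-attached next to it; once perf exited, the kill -0 check failed ("No such process") and the script moved on to the parallel add/remove phase that follows.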
00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.479 13:25:50 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:53.741 null1 00:14:53.741 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.741 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.741 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:54.000 null2 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:54.000 null3 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.000 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:54.260 null4 00:14:54.260 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.260 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.260 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:54.260 null5 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:54.521 null6 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.521 13:25:51 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:54.782 null7 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@66 -- # wait 887033 887034 887036 887038 887040 887042 887043 887046 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:54.782 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
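The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above and below come from eight backgrounded workers, each cycling one namespace ID ten times against nqn.2016-06.io.spdk:cnode1 while the target stays live. A minimal sketch of that pattern, reconstructed from the xtrace (rpc path shortened via a hypothetical SPDK_DIR; the real ns_hotplug_stress.sh may differ in detail):

  rpc=$SPDK_DIR/scripts/rpc.py              # assumption: SPDK_DIR points at the SPDK checkout
  subsys=nqn.2016-06.io.spdk:cnode1

  add_remove() {                            # one worker: churn a single namespace ID
      local nsid=$1 bdev=$2
      for (( i = 0; i < 10; ++i )); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
          "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
      done
  }

  nthreads=8; pids=()
  for (( i = 0; i < nthreads; ++i )); do
      "$rpc" bdev_null_create "null$i" 100 4096    # the null0..null7 bdevs created above
  done
  for (( i = 0; i < nthreads; ++i )); do
      add_remove "$((i + 1))" "null$i" &           # NSIDs 1..8, one per worker
      pids+=($!)
  done
  wait "${pids[@]}"                                # e.g. the wait on 887033 887034 ... above

Because the eight workers run unsynchronized, the add/remove trace lines that follow interleave in arbitrary order; that churn, concurrent with live connections, is the point of the stress test.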
00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.044 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.305 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.566 13:25:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.566 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.566 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.566 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.856 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.117 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.378 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.639 13:25:53 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.639 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.639 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.639 13:25:54 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:56.639 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.639 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.640 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.901 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.902 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.163 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.424 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.425 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.686 13:25:54 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.686 13:25:55 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.947 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:58.207 13:25:55 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:58.207 13:25:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:58.207 13:25:55 -- nvmf/common.sh@116 -- # sync 00:14:58.207 13:25:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:58.207 13:25:55 -- nvmf/common.sh@119 -- # set +e 00:14:58.207 13:25:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:58.207 13:25:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:58.207 rmmod nvme_tcp 00:14:58.207 rmmod nvme_fabrics 00:14:58.207 rmmod nvme_keyring 
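At this point the eight workers have finished and the cleanup path begins; it continues below with stopping the target app and flushing the test addresses. In outline, the teardown traced here is (a reconstruction of the nvmftestfini flow as it appears in this trace, not the verbatim helper):

  trap - SIGINT SIGTERM EXIT                     # drop the test's error trap
  modprobe -v -r nvme-tcp                        # unloads nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # stop nvmf_tgt (pid 879856 in the lines below)
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumption: how _remove_spdk_ns drops the target netns
  ip -4 addr flush cvl_0_1                       # clear the initiator-side address (shown below)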
00:14:58.207 13:25:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:58.207 13:25:55 -- nvmf/common.sh@123 -- # set -e 00:14:58.207 13:25:55 -- nvmf/common.sh@124 -- # return 0 00:14:58.207 13:25:55 -- nvmf/common.sh@477 -- # '[' -n 879856 ']' 00:14:58.207 13:25:55 -- nvmf/common.sh@478 -- # killprocess 879856 00:14:58.207 13:25:55 -- common/autotest_common.sh@926 -- # '[' -z 879856 ']' 00:14:58.207 13:25:55 -- common/autotest_common.sh@930 -- # kill -0 879856 00:14:58.207 13:25:55 -- common/autotest_common.sh@931 -- # uname 00:14:58.207 13:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.207 13:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 879856 00:14:58.207 13:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:58.207 13:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:58.207 13:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 879856' 00:14:58.207 killing process with pid 879856 00:14:58.207 13:25:55 -- common/autotest_common.sh@945 -- # kill 879856 00:14:58.207 13:25:55 -- common/autotest_common.sh@950 -- # wait 879856 00:14:58.468 13:25:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:58.468 13:25:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:58.468 13:25:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:58.468 13:25:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.468 13:25:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:58.468 13:25:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.468 13:25:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.468 13:25:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.385 13:25:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:00.385 00:15:00.385 real 0m47.761s 00:15:00.385 user 3m13.909s 00:15:00.385 sys 0m16.719s 00:15:00.385 13:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.385 13:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:00.385 ************************************ 00:15:00.385 END TEST nvmf_ns_hotplug_stress 00:15:00.385 ************************************ 00:15:00.385 13:25:57 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:00.385 13:25:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.386 13:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.386 13:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:00.386 ************************************ 00:15:00.386 START TEST nvmf_connect_stress 00:15:00.386 ************************************ 00:15:00.386 13:25:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:00.648 * Looking for test storage... 
00:15:00.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.648 13:25:57 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.648 13:25:57 -- nvmf/common.sh@7 -- # uname -s 00:15:00.648 13:25:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.648 13:25:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.648 13:25:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.648 13:25:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.648 13:25:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.648 13:25:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.648 13:25:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.648 13:25:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.648 13:25:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.648 13:25:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.648 13:25:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.648 13:25:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.648 13:25:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.648 13:25:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.648 13:25:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.648 13:25:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.648 13:25:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.648 13:25:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.648 13:25:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.648 13:25:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.648 13:25:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.648 13:25:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.648 13:25:57 -- paths/export.sh@5 -- # export PATH 00:15:00.648 13:25:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.648 13:25:57 -- nvmf/common.sh@46 -- # : 0 00:15:00.648 13:25:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:00.648 13:25:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:00.648 13:25:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:00.648 13:25:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.648 13:25:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.648 13:25:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:00.648 13:25:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:00.648 13:25:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:00.648 13:25:57 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:00.648 13:25:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:00.648 13:25:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.648 13:25:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:00.648 13:25:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:00.648 13:25:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:00.648 13:25:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.648 13:25:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.648 13:25:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.648 13:25:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:00.648 13:25:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:00.648 13:25:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:00.648 13:25:57 -- common/autotest_common.sh@10 -- # set +x 00:15:08.802 13:26:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.802 13:26:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.802 13:26:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.802 13:26:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.802 13:26:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.802 13:26:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.802 13:26:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.802 13:26:04 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.802 13:26:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.802 13:26:04 -- nvmf/common.sh@295 -- # e810=() 00:15:08.802 13:26:04 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.802 13:26:04 -- nvmf/common.sh@296 -- # x722=() 
00:15:08.802 13:26:04 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.803 13:26:04 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.803 13:26:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.803 13:26:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.803 13:26:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.803 13:26:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:08.803 13:26:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.803 13:26:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:08.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:08.803 13:26:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.803 13:26:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:08.803 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:08.803 13:26:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.803 13:26:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.803 13:26:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.803 13:26:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:08.803 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:08.803 13:26:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:08.803 13:26:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.803 13:26:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.803 13:26:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.803 13:26:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:08.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:08.803 13:26:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.803 13:26:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.803 13:26:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:08.803 13:26:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:08.803 13:26:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.803 13:26:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.803 13:26:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.803 13:26:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:08.803 13:26:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.803 13:26:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.803 13:26:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:08.803 13:26:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.803 13:26:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.803 13:26:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:08.803 13:26:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:08.803 13:26:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.803 13:26:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.803 13:26:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.803 13:26:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.803 13:26:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:08.803 13:26:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.803 13:26:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.803 13:26:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.803 13:26:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:08.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:15:08.803 00:15:08.803 --- 10.0.0.2 ping statistics --- 00:15:08.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.803 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:15:08.803 13:26:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:15:08.803 00:15:08.803 --- 10.0.0.1 ping statistics --- 00:15:08.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.803 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:15:08.803 13:26:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.803 13:26:05 -- nvmf/common.sh@410 -- # return 0 00:15:08.803 13:26:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.803 13:26:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.803 13:26:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.803 13:26:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.803 13:26:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.803 13:26:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.803 13:26:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.803 13:26:05 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:08.803 13:26:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.803 13:26:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:08.803 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:15:08.803 13:26:05 -- nvmf/common.sh@469 -- # nvmfpid=891998 00:15:08.803 13:26:05 -- nvmf/common.sh@470 -- # waitforlisten 891998 00:15:08.803 13:26:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:08.803 13:26:05 -- common/autotest_common.sh@819 -- # '[' -z 891998 ']' 00:15:08.803 13:26:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.803 13:26:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.803 13:26:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.803 13:26:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.803 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:15:08.803 [2024-07-26 13:26:05.206537] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:08.803 [2024-07-26 13:26:05.206589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.803 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.803 [2024-07-26 13:26:05.289613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.803 [2024-07-26 13:26:05.332519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.803 [2024-07-26 13:26:05.332688] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.804 [2024-07-26 13:26:05.332699] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.804 [2024-07-26 13:26:05.332709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
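For readers following the nvmftestinit output above: the harness moves one of the two ice ports into a private network namespace so the NVMe/TCP target and initiator can talk to each other over real NICs on a single host. A condensed sketch of that plumbing, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing shown in the log (the harness also flushes stale addresses first, omitted here):

# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1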
00:15:08.804 [2024-07-26 13:26:05.332846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.804 [2024-07-26 13:26:05.333054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.804 [2024-07-26 13:26:05.333055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.804 13:26:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:08.804 13:26:05 -- common/autotest_common.sh@852 -- # return 0 00:15:08.804 13:26:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.804 13:26:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:08.804 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 13:26:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.804 13:26:06 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.804 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.804 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 [2024-07-26 13:26:06.024052] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.804 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.804 13:26:06 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:08.804 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.804 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.804 13:26:06 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.804 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.804 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 [2024-07-26 13:26:06.060328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.804 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.804 13:26:06 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:08.804 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.804 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 NULL1 00:15:08.804 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.804 13:26:06 -- target/connect_stress.sh@21 -- # PERF_PID=892270 00:15:08.804 13:26:06 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:08.804 13:26:06 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:08.804 13:26:06 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:08.804 13:26:06 -- target/connect_stress.sh@28 -- # cat 00:15:08.804 13:26:06 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:08.804 13:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.804 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:08.804 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:09.066 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.066 13:26:06 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:09.066 13:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.066 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.066 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:09.638 13:26:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.638 13:26:06 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:09.638 13:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.638 13:26:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.638 13:26:06 -- common/autotest_common.sh@10 -- # set +x 00:15:09.899 13:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.899 13:26:07 -- target/connect_stress.sh@34 -- # 
kill -0 892270 00:15:09.899 13:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.899 13:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.899 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.159 13:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.159 13:26:07 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:10.159 13:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.159 13:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.159 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.420 13:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.420 13:26:07 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:10.420 13:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.420 13:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.420 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.682 13:26:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.682 13:26:08 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:10.682 13:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.682 13:26:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.682 13:26:08 -- common/autotest_common.sh@10 -- # set +x 00:15:11.255 13:26:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.255 13:26:08 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:11.255 13:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.255 13:26:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.255 13:26:08 -- common/autotest_common.sh@10 -- # set +x 00:15:11.517 13:26:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.517 13:26:08 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:11.517 13:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.517 13:26:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.517 13:26:08 -- common/autotest_common.sh@10 -- # set +x 00:15:11.778 13:26:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.778 13:26:09 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:11.778 13:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.778 13:26:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.778 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:12.039 13:26:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.039 13:26:09 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:12.039 13:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.039 13:26:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.039 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:12.300 13:26:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.300 13:26:09 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:12.300 13:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.300 13:26:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.300 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:12.873 13:26:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.873 13:26:10 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:12.873 13:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.873 13:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.873 13:26:10 -- common/autotest_common.sh@10 -- # set +x 00:15:13.134 13:26:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.134 13:26:10 -- target/connect_stress.sh@34 -- # kill -0 892270 
00:15:13.134 13:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.134 13:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.134 13:26:10 -- common/autotest_common.sh@10 -- # set +x 00:15:13.394 13:26:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.394 13:26:10 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:13.394 13:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.394 13:26:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.394 13:26:10 -- common/autotest_common.sh@10 -- # set +x 00:15:13.655 13:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.656 13:26:11 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:13.656 13:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.656 13:26:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.656 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:13.917 13:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.917 13:26:11 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:13.917 13:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.917 13:26:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.917 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:14.490 13:26:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.490 13:26:11 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:14.490 13:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.490 13:26:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.490 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:14.751 13:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.751 13:26:12 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:14.751 13:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.751 13:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.751 13:26:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.013 13:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.013 13:26:12 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:15.013 13:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.013 13:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.013 13:26:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.274 13:26:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.274 13:26:12 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:15.274 13:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.274 13:26:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.274 13:26:12 -- common/autotest_common.sh@10 -- # set +x 00:15:15.845 13:26:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:15.845 13:26:13 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:15.845 13:26:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.845 13:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:15.845 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.106 13:26:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.106 13:26:13 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:16.106 13:26:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.106 13:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.106 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.365 13:26:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.365 13:26:13 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:16.365 13:26:13 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.365 13:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.365 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.626 13:26:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.626 13:26:13 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:16.626 13:26:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.626 13:26:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.626 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:16.887 13:26:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.887 13:26:14 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:16.887 13:26:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.887 13:26:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.887 13:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.459 13:26:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.459 13:26:14 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:17.459 13:26:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.459 13:26:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.459 13:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.720 13:26:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.720 13:26:14 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:17.720 13:26:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.720 13:26:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.720 13:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:17.980 13:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.980 13:26:15 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:17.980 13:26:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.980 13:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.980 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:18.240 13:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.240 13:26:15 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:18.240 13:26:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.240 13:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.240 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:18.511 13:26:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.511 13:26:15 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:18.511 13:26:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.511 13:26:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.511 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:18.812 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:19.072 13:26:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.072 13:26:16 -- target/connect_stress.sh@34 -- # kill -0 892270 00:15:19.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (892270) - No such process 00:15:19.072 13:26:16 -- target/connect_stress.sh@38 -- # wait 892270 00:15:19.072 13:26:16 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:19.072 13:26:16 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:19.072 13:26:16 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:19.072 13:26:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.073 13:26:16 -- nvmf/common.sh@116 -- # sync 00:15:19.073 13:26:16 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.073 13:26:16 -- nvmf/common.sh@119 -- # set +e 00:15:19.073 13:26:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.073 13:26:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.073 rmmod nvme_tcp 00:15:19.073 rmmod nvme_fabrics 00:15:19.073 rmmod nvme_keyring 00:15:19.073 13:26:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.073 13:26:16 -- nvmf/common.sh@123 -- # set -e 00:15:19.073 13:26:16 -- nvmf/common.sh@124 -- # return 0 00:15:19.073 13:26:16 -- nvmf/common.sh@477 -- # '[' -n 891998 ']' 00:15:19.073 13:26:16 -- nvmf/common.sh@478 -- # killprocess 891998 00:15:19.073 13:26:16 -- common/autotest_common.sh@926 -- # '[' -z 891998 ']' 00:15:19.073 13:26:16 -- common/autotest_common.sh@930 -- # kill -0 891998 00:15:19.073 13:26:16 -- common/autotest_common.sh@931 -- # uname 00:15:19.073 13:26:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:19.073 13:26:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 891998 00:15:19.073 13:26:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:19.073 13:26:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:19.073 13:26:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 891998' 00:15:19.073 killing process with pid 891998 00:15:19.073 13:26:16 -- common/autotest_common.sh@945 -- # kill 891998 00:15:19.073 13:26:16 -- common/autotest_common.sh@950 -- # wait 891998 00:15:19.073 13:26:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.073 13:26:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.073 13:26:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.073 13:26:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.073 13:26:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.073 13:26:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.073 13:26:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.073 13:26:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.623 13:26:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:21.623 00:15:21.623 real 0m20.740s 00:15:21.623 user 0m41.946s 00:15:21.623 sys 0m8.603s 00:15:21.623 13:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.623 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:15:21.623 ************************************ 00:15:21.623 END TEST nvmf_connect_stress 00:15:21.623 ************************************ 00:15:21.623 13:26:18 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:21.623 13:26:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.623 13:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.623 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:15:21.623 ************************************ 00:15:21.623 START TEST nvmf_fused_ordering 00:15:21.623 ************************************ 00:15:21.623 13:26:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:21.623 * Looking for test storage... 
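Before the fused_ordering script repeats the same bring-up below, it is worth condensing what the connect_stress run above did once nvmf_tgt was listening on /var/tmp/spdk.sock. The rpc_cmd calls in the log correspond to SPDK JSON-RPC methods; issued through the standalone scripts/rpc.py client, a rough equivalent (run from the SPDK repo root, flags copied verbatim from the log) looks like:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
# Target-side provisioning: TCP transport, one subsystem open to any host, a listener
# on the namespaced address, and a 1000 MiB / 512 B-block null bdev.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
# Initiator side: stress the listener (the -t 10 matches the roughly ten-second loop
# above) while the script keeps issuing RPCs and checking the tool is still alive
# via kill -0 $PERF_PID.
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10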
00:15:21.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.623 13:26:18 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.623 13:26:18 -- nvmf/common.sh@7 -- # uname -s 00:15:21.623 13:26:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.623 13:26:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.623 13:26:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.623 13:26:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.623 13:26:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.623 13:26:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.623 13:26:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.623 13:26:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.623 13:26:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.623 13:26:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.623 13:26:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.623 13:26:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:21.623 13:26:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.623 13:26:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.623 13:26:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.623 13:26:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.623 13:26:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.623 13:26:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.623 13:26:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.623 13:26:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.623 13:26:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.623 13:26:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.623 13:26:18 -- paths/export.sh@5 -- # export PATH 00:15:21.623 13:26:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.623 13:26:18 -- nvmf/common.sh@46 -- # : 0 00:15:21.623 13:26:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.623 13:26:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.623 13:26:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.623 13:26:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.623 13:26:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.623 13:26:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.623 13:26:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.623 13:26:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.623 13:26:18 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:21.623 13:26:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.623 13:26:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.623 13:26:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.623 13:26:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.623 13:26:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.623 13:26:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.623 13:26:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.623 13:26:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.623 13:26:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:21.623 13:26:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:21.623 13:26:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:21.623 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:15:28.220 13:26:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:28.221 13:26:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:28.221 13:26:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:28.221 13:26:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:28.221 13:26:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:28.221 13:26:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:28.221 13:26:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:28.221 13:26:25 -- nvmf/common.sh@294 -- # net_devs=() 00:15:28.221 13:26:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:28.221 13:26:25 -- nvmf/common.sh@295 -- # e810=() 00:15:28.221 13:26:25 -- nvmf/common.sh@295 -- # local -ga e810 00:15:28.221 13:26:25 -- nvmf/common.sh@296 -- # x722=() 
00:15:28.221 13:26:25 -- nvmf/common.sh@296 -- # local -ga x722 00:15:28.221 13:26:25 -- nvmf/common.sh@297 -- # mlx=() 00:15:28.221 13:26:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:28.221 13:26:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.221 13:26:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:28.221 13:26:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:28.221 13:26:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.221 13:26:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:28.221 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:28.221 13:26:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:28.221 13:26:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:28.221 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:28.221 13:26:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:28.221 13:26:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.221 13:26:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.221 13:26:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:28.221 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:28.221 13:26:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:28.221 13:26:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:28.221 13:26:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.221 13:26:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.221 13:26:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:28.221 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:28.221 13:26:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.221 13:26:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:28.221 13:26:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:28.221 13:26:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.221 13:26:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.221 13:26:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.221 13:26:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:28.221 13:26:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.221 13:26:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.221 13:26:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:28.221 13:26:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.221 13:26:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.221 13:26:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:28.221 13:26:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:28.221 13:26:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.221 13:26:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.221 13:26:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.221 13:26:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.221 13:26:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:28.221 13:26:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.221 13:26:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.221 13:26:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.221 13:26:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:28.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:15:28.221 00:15:28.221 --- 10.0.0.2 ping statistics --- 00:15:28.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.221 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:15:28.221 13:26:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:15:28.221 00:15:28.221 --- 10.0.0.1 ping statistics --- 00:15:28.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.221 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:15:28.221 13:26:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.221 13:26:25 -- nvmf/common.sh@410 -- # return 0 00:15:28.221 13:26:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:28.221 13:26:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.221 13:26:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:28.221 13:26:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.221 13:26:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:28.221 13:26:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:28.221 13:26:25 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:28.221 13:26:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.221 13:26:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:28.221 13:26:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.221 13:26:25 -- nvmf/common.sh@469 -- # nvmfpid=898332 00:15:28.222 13:26:25 -- nvmf/common.sh@470 -- # waitforlisten 898332 00:15:28.222 13:26:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.222 13:26:25 -- common/autotest_common.sh@819 -- # '[' -z 898332 ']' 00:15:28.222 13:26:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.222 13:26:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.222 13:26:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.222 13:26:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.222 13:26:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.222 [2024-07-26 13:26:25.660693] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:28.222 [2024-07-26 13:26:25.660756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.484 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.484 [2024-07-26 13:26:25.749675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.484 [2024-07-26 13:26:25.794102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.484 [2024-07-26 13:26:25.794254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.484 [2024-07-26 13:26:25.794266] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.484 [2024-07-26 13:26:25.794273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
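As with the first test, the nvmfappstart step above boils down to launching the target inside the namespace and blocking until its RPC socket answers. Roughly, with the flags taken from the log and the harness's waitforlisten replaced by a simple illustrative poll:

# -i: shared-memory id, -e: tracepoint group mask, -m: core mask (all as logged above).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Wait until the target accepts RPCs on the default UNIX-domain socket.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done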
00:15:28.484 [2024-07-26 13:26:25.794296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.058 13:26:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.058 13:26:26 -- common/autotest_common.sh@852 -- # return 0 00:15:29.058 13:26:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.058 13:26:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 13:26:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.058 13:26:26 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.058 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 [2024-07-26 13:26:26.485117] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.058 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.058 13:26:26 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.058 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.058 13:26:26 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.058 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 [2024-07-26 13:26:26.509342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.058 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.058 13:26:26 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:29.058 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 NULL1 00:15:29.058 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.058 13:26:26 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:29.058 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.058 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.320 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.320 13:26:26 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:29.320 13:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.320 13:26:26 -- common/autotest_common.sh@10 -- # set +x 00:15:29.320 13:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.320 13:26:26 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:29.320 [2024-07-26 13:26:26.577661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
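Relative to connect_stress, the fused_ordering bring-up just logged additionally waits for bdev examination and attaches the null bdev as a namespace, which gives the initiator-side tool a namespace to work against. Condensed in the same rpc.py form, identifiers verbatim from the log:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# The example app connects using an SPDK transport-ID string and prints the numbered
# fused_ordering(N) progress lines that make up the rest of this output.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'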
00:15:29.320 [2024-07-26 13:26:26.577725] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898681 ] 00:15:29.320 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.893 Attached to nqn.2016-06.io.spdk:cnode1 00:15:29.893 Namespace ID: 1 size: 1GB 00:15:29.893 fused_ordering(0) 00:15:29.893 fused_ordering(1) 00:15:29.893 fused_ordering(2) 00:15:29.893 fused_ordering(3) 00:15:29.893 fused_ordering(4) 00:15:29.893 fused_ordering(5) 00:15:29.893 fused_ordering(6) 00:15:29.893 fused_ordering(7) 00:15:29.893 fused_ordering(8) 00:15:29.893 fused_ordering(9) 00:15:29.893 fused_ordering(10) 00:15:29.893 fused_ordering(11) 00:15:29.893 fused_ordering(12) 00:15:29.893 fused_ordering(13) 00:15:29.893 fused_ordering(14) 00:15:29.893 fused_ordering(15) 00:15:29.893 fused_ordering(16) 00:15:29.893 fused_ordering(17) 00:15:29.893 fused_ordering(18) 00:15:29.893 fused_ordering(19) 00:15:29.893 fused_ordering(20) 00:15:29.893 fused_ordering(21) 00:15:29.893 fused_ordering(22) 00:15:29.893 fused_ordering(23) 00:15:29.893 fused_ordering(24) 00:15:29.893 fused_ordering(25) 00:15:29.893 fused_ordering(26) 00:15:29.893 fused_ordering(27) 00:15:29.893 fused_ordering(28) 00:15:29.893 fused_ordering(29) 00:15:29.893 fused_ordering(30) 00:15:29.893 fused_ordering(31) 00:15:29.893 fused_ordering(32) 00:15:29.893 fused_ordering(33) 00:15:29.893 fused_ordering(34) 00:15:29.893 fused_ordering(35) 00:15:29.893 fused_ordering(36) 00:15:29.893 fused_ordering(37) 00:15:29.893 fused_ordering(38) 00:15:29.893 fused_ordering(39) 00:15:29.893 fused_ordering(40) 00:15:29.893 fused_ordering(41) 00:15:29.893 fused_ordering(42) 00:15:29.893 fused_ordering(43) 00:15:29.893 fused_ordering(44) 00:15:29.893 fused_ordering(45) 00:15:29.893 fused_ordering(46) 00:15:29.893 fused_ordering(47) 00:15:29.893 fused_ordering(48) 00:15:29.893 fused_ordering(49) 00:15:29.893 fused_ordering(50) 00:15:29.893 fused_ordering(51) 00:15:29.893 fused_ordering(52) 00:15:29.893 fused_ordering(53) 00:15:29.893 fused_ordering(54) 00:15:29.893 fused_ordering(55) 00:15:29.893 fused_ordering(56) 00:15:29.893 fused_ordering(57) 00:15:29.893 fused_ordering(58) 00:15:29.893 fused_ordering(59) 00:15:29.893 fused_ordering(60) 00:15:29.893 fused_ordering(61) 00:15:29.893 fused_ordering(62) 00:15:29.893 fused_ordering(63) 00:15:29.893 fused_ordering(64) 00:15:29.893 fused_ordering(65) 00:15:29.893 fused_ordering(66) 00:15:29.893 fused_ordering(67) 00:15:29.893 fused_ordering(68) 00:15:29.893 fused_ordering(69) 00:15:29.893 fused_ordering(70) 00:15:29.893 fused_ordering(71) 00:15:29.893 fused_ordering(72) 00:15:29.893 fused_ordering(73) 00:15:29.893 fused_ordering(74) 00:15:29.893 fused_ordering(75) 00:15:29.893 fused_ordering(76) 00:15:29.893 fused_ordering(77) 00:15:29.893 fused_ordering(78) 00:15:29.893 fused_ordering(79) 00:15:29.893 fused_ordering(80) 00:15:29.893 fused_ordering(81) 00:15:29.893 fused_ordering(82) 00:15:29.893 fused_ordering(83) 00:15:29.893 fused_ordering(84) 00:15:29.893 fused_ordering(85) 00:15:29.893 fused_ordering(86) 00:15:29.893 fused_ordering(87) 00:15:29.893 fused_ordering(88) 00:15:29.893 fused_ordering(89) 00:15:29.893 fused_ordering(90) 00:15:29.893 fused_ordering(91) 00:15:29.893 fused_ordering(92) 00:15:29.893 fused_ordering(93) 00:15:29.893 fused_ordering(94) 00:15:29.893 fused_ordering(95) 00:15:29.893 fused_ordering(96) 00:15:29.893 
[... fused_ordering(97) through fused_ordering(956) logged between 00:15:29.893 and 00:15:32.931; individual iteration lines condensed ...]
fused_ordering(957) 00:15:32.931 fused_ordering(958) 00:15:32.931 fused_ordering(959) 00:15:32.931 fused_ordering(960) 00:15:32.931 fused_ordering(961) 00:15:32.931 fused_ordering(962) 00:15:32.931 fused_ordering(963) 00:15:32.931 fused_ordering(964) 00:15:32.931 fused_ordering(965) 00:15:32.931 fused_ordering(966) 00:15:32.931 fused_ordering(967) 00:15:32.931 fused_ordering(968) 00:15:32.931 fused_ordering(969) 00:15:32.931 fused_ordering(970) 00:15:32.931 fused_ordering(971) 00:15:32.931 fused_ordering(972) 00:15:32.931 fused_ordering(973) 00:15:32.931 fused_ordering(974) 00:15:32.931 fused_ordering(975) 00:15:32.931 fused_ordering(976) 00:15:32.931 fused_ordering(977) 00:15:32.931 fused_ordering(978) 00:15:32.931 fused_ordering(979) 00:15:32.931 fused_ordering(980) 00:15:32.931 fused_ordering(981) 00:15:32.931 fused_ordering(982) 00:15:32.931 fused_ordering(983) 00:15:32.931 fused_ordering(984) 00:15:32.931 fused_ordering(985) 00:15:32.931 fused_ordering(986) 00:15:32.931 fused_ordering(987) 00:15:32.931 fused_ordering(988) 00:15:32.931 fused_ordering(989) 00:15:32.931 fused_ordering(990) 00:15:32.931 fused_ordering(991) 00:15:32.931 fused_ordering(992) 00:15:32.931 fused_ordering(993) 00:15:32.931 fused_ordering(994) 00:15:32.931 fused_ordering(995) 00:15:32.931 fused_ordering(996) 00:15:32.931 fused_ordering(997) 00:15:32.931 fused_ordering(998) 00:15:32.931 fused_ordering(999) 00:15:32.931 fused_ordering(1000) 00:15:32.931 fused_ordering(1001) 00:15:32.931 fused_ordering(1002) 00:15:32.931 fused_ordering(1003) 00:15:32.931 fused_ordering(1004) 00:15:32.931 fused_ordering(1005) 00:15:32.931 fused_ordering(1006) 00:15:32.931 fused_ordering(1007) 00:15:32.931 fused_ordering(1008) 00:15:32.931 fused_ordering(1009) 00:15:32.931 fused_ordering(1010) 00:15:32.931 fused_ordering(1011) 00:15:32.931 fused_ordering(1012) 00:15:32.931 fused_ordering(1013) 00:15:32.931 fused_ordering(1014) 00:15:32.931 fused_ordering(1015) 00:15:32.931 fused_ordering(1016) 00:15:32.931 fused_ordering(1017) 00:15:32.931 fused_ordering(1018) 00:15:32.931 fused_ordering(1019) 00:15:32.931 fused_ordering(1020) 00:15:32.931 fused_ordering(1021) 00:15:32.931 fused_ordering(1022) 00:15:32.931 fused_ordering(1023) 00:15:32.931 13:26:30 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:32.931 13:26:30 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:32.931 13:26:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:32.931 13:26:30 -- nvmf/common.sh@116 -- # sync 00:15:32.931 13:26:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:32.931 13:26:30 -- nvmf/common.sh@119 -- # set +e 00:15:32.931 13:26:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:32.931 13:26:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:32.931 rmmod nvme_tcp 00:15:32.931 rmmod nvme_fabrics 00:15:32.931 rmmod nvme_keyring 00:15:32.931 13:26:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:32.931 13:26:30 -- nvmf/common.sh@123 -- # set -e 00:15:32.931 13:26:30 -- nvmf/common.sh@124 -- # return 0 00:15:32.931 13:26:30 -- nvmf/common.sh@477 -- # '[' -n 898332 ']' 00:15:32.931 13:26:30 -- nvmf/common.sh@478 -- # killprocess 898332 00:15:32.931 13:26:30 -- common/autotest_common.sh@926 -- # '[' -z 898332 ']' 00:15:32.931 13:26:30 -- common/autotest_common.sh@930 -- # kill -0 898332 00:15:32.931 13:26:30 -- common/autotest_common.sh@931 -- # uname 00:15:32.931 13:26:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.931 13:26:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 898332 00:15:32.931 13:26:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:32.931 13:26:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:32.931 13:26:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 898332' 00:15:32.931 killing process with pid 898332 00:15:32.931 13:26:30 -- common/autotest_common.sh@945 -- # kill 898332 00:15:32.931 13:26:30 -- common/autotest_common.sh@950 -- # wait 898332 00:15:32.931 13:26:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:32.931 13:26:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:32.931 13:26:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:32.931 13:26:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.931 13:26:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:32.931 13:26:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.931 13:26:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.931 13:26:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.482 13:26:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:35.482 00:15:35.482 real 0m13.747s 00:15:35.482 user 0m7.831s 00:15:35.482 sys 0m7.632s 00:15:35.482 13:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.482 13:26:32 -- common/autotest_common.sh@10 -- # set +x 00:15:35.482 ************************************ 00:15:35.482 END TEST nvmf_fused_ordering 00:15:35.482 ************************************ 00:15:35.482 13:26:32 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:35.482 13:26:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:35.482 13:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:35.482 13:26:32 -- common/autotest_common.sh@10 -- # set +x 00:15:35.482 ************************************ 00:15:35.482 START TEST nvmf_delete_subsystem 00:15:35.482 ************************************ 00:15:35.482 13:26:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:35.482 * Looking for test storage... 
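The nvmftestfini teardown traced above reduces to a short shell sequence. A minimal sketch, assuming the pid (898332), namespace (cvl_0_0_ns_spdk) and interface names (cvl_0_0/cvl_0_1) seen in this run; the namespace removal step stands in for _remove_spdk_ns and is an assumption, not copied from the trace:

  # unload the kernel NVMe-oF initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt application that served the test (pid from this run)
  kill 898332
  wait 898332
  # assumed equivalent of _remove_spdk_ns: drop the target-side namespace
  ip netns delete cvl_0_0_ns_spdk
  # flush the initiator-side address (10.0.0.1/24) from the second port
  ip -4 addr flush cvl_0_1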
00:15:35.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.482 13:26:32 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.482 13:26:32 -- nvmf/common.sh@7 -- # uname -s 00:15:35.482 13:26:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.482 13:26:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.482 13:26:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.482 13:26:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.482 13:26:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.482 13:26:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.482 13:26:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.482 13:26:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.482 13:26:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.482 13:26:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.482 13:26:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.482 13:26:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:35.482 13:26:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.482 13:26:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.482 13:26:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.482 13:26:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.482 13:26:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.482 13:26:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.482 13:26:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.482 13:26:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.483 13:26:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.483 13:26:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.483 13:26:32 -- paths/export.sh@5 -- # export PATH 00:15:35.483 13:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.483 13:26:32 -- nvmf/common.sh@46 -- # : 0 00:15:35.483 13:26:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:35.483 13:26:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:35.483 13:26:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:35.483 13:26:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.483 13:26:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.483 13:26:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:35.483 13:26:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:35.483 13:26:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:35.483 13:26:32 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:35.483 13:26:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:35.483 13:26:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.483 13:26:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:35.483 13:26:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:35.483 13:26:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:35.483 13:26:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.483 13:26:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.483 13:26:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.483 13:26:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:35.483 13:26:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:35.483 13:26:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:35.483 13:26:32 -- common/autotest_common.sh@10 -- # set +x 00:15:42.075 13:26:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.075 13:26:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:42.076 13:26:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:42.076 13:26:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:42.076 13:26:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:42.076 13:26:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:42.076 13:26:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:42.076 13:26:39 -- nvmf/common.sh@294 -- # net_devs=() 00:15:42.076 13:26:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:42.076 13:26:39 -- nvmf/common.sh@295 -- # e810=() 00:15:42.076 13:26:39 -- nvmf/common.sh@295 -- # local -ga e810 00:15:42.076 13:26:39 -- nvmf/common.sh@296 -- # x722=() 
00:15:42.076 13:26:39 -- nvmf/common.sh@296 -- # local -ga x722 00:15:42.076 13:26:39 -- nvmf/common.sh@297 -- # mlx=() 00:15:42.076 13:26:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:42.076 13:26:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.076 13:26:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:42.076 13:26:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:42.076 13:26:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.076 13:26:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:42.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:42.076 13:26:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.076 13:26:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:42.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:42.076 13:26:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.076 13:26:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.076 13:26:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.076 13:26:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:42.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:42.076 13:26:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:42.076 13:26:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.076 13:26:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.076 13:26:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.076 13:26:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:42.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:42.076 13:26:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.076 13:26:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:42.076 13:26:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:42.076 13:26:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:42.076 13:26:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.076 13:26:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.076 13:26:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.076 13:26:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:42.076 13:26:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.076 13:26:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.076 13:26:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:42.076 13:26:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.076 13:26:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.076 13:26:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:42.076 13:26:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:42.076 13:26:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.076 13:26:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.076 13:26:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.076 13:26:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.076 13:26:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:42.076 13:26:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.338 13:26:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.338 13:26:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.338 13:26:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:42.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:15:42.338 00:15:42.338 --- 10.0.0.2 ping statistics --- 00:15:42.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.338 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:15:42.338 13:26:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:15:42.338 00:15:42.338 --- 10.0.0.1 ping statistics --- 00:15:42.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.338 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:15:42.338 13:26:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.338 13:26:39 -- nvmf/common.sh@410 -- # return 0 00:15:42.338 13:26:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.338 13:26:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.338 13:26:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.338 13:26:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.338 13:26:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.338 13:26:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.338 13:26:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.338 13:26:39 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:42.338 13:26:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.338 13:26:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.338 13:26:39 -- common/autotest_common.sh@10 -- # set +x 00:15:42.338 13:26:39 -- nvmf/common.sh@469 -- # nvmfpid=903377 00:15:42.338 13:26:39 -- nvmf/common.sh@470 -- # waitforlisten 903377 00:15:42.338 13:26:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:42.338 13:26:39 -- common/autotest_common.sh@819 -- # '[' -z 903377 ']' 00:15:42.338 13:26:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.338 13:26:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.338 13:26:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.338 13:26:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.338 13:26:39 -- common/autotest_common.sh@10 -- # set +x 00:15:42.338 [2024-07-26 13:26:39.744753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:42.338 [2024-07-26 13:26:39.744835] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.338 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.599 [2024-07-26 13:26:39.818908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.599 [2024-07-26 13:26:39.855873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.599 [2024-07-26 13:26:39.856027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.599 [2024-07-26 13:26:39.856037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.599 [2024-07-26 13:26:39.856045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
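Condensed from the nvmf_tcp_init and nvmfappstart steps traced above: the target runs inside a network namespace that owns one port of the NIC, the initiator keeps the other port, and reachability is checked with ping before the target starts. A sketch using only the interface names, addresses and flags from this run:

  # move one port into a private namespace for the target
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator port (host side) and the target port (namespace side)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic arriving from the initiator port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before launching the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the SPDK NVMe-oF target inside the namespace (flags from this run)
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3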
00:15:42.599 [2024-07-26 13:26:39.856196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.599 [2024-07-26 13:26:39.856197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.171 13:26:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.171 13:26:40 -- common/autotest_common.sh@852 -- # return 0 00:15:43.171 13:26:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.171 13:26:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 13:26:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 [2024-07-26 13:26:40.535469] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 [2024-07-26 13:26:40.551641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 NULL1 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 Delay0 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:43.171 13:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.171 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.171 13:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@28 -- # perf_pid=903725 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:43.171 13:26:40 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:43.171 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.171 [2024-07-26 13:26:40.636344] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:45.718 13:26:42 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.718 13:26:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.718 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Read completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 Write completed with error (sct=0, sc=8) 00:15:45.718 starting I/O failed: -6 00:15:45.718 [2024-07-26 
13:26:42.803967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c680 is same with the state(5) to be set
[... repeated per-command 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines condensed; the remaining qpair state errors logged while the subsystem was deleted under I/O were: ...]
[2024-07-26 13:26:42.806580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b94000c00 is same with the state(5) to be set
[2024-07-26 13:26:43.778607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1290da0 is same with the state(5) to be set
[2024-07-26 13:26:43.807475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128e600 is same with the state(5) to be set
[2024-07-26 13:26:43.808081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128eb60 is same with the state(5) to be set
[2024-07-26 13:26:43.808546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b9400c600 is same with the state(5) to be set
[2024-07-26 13:26:43.808651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b9400bf20 is same with the state(5) to be set
[2024-07-26 13:26:43.809125]
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1290da0 (9): Bad file descriptor 00:15:46.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:46.699 13:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.699 13:26:43 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:46.699 13:26:43 -- target/delete_subsystem.sh@35 -- # kill -0 903725 00:15:46.699 13:26:43 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:46.699 Initializing NVMe Controllers 00:15:46.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:46.699 Controller IO queue size 128, less than required. 00:15:46.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:46.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:46.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:46.699 Initialization complete. Launching workers. 00:15:46.699 ======================================================== 00:15:46.699 Latency(us) 00:15:46.699 Device Information : IOPS MiB/s Average min max 00:15:46.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.59 0.09 882160.69 254.45 1008506.96 00:15:46.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.69 0.08 934626.76 268.27 2001098.80 00:15:46.699 ======================================================== 00:15:46.699 Total : 331.28 0.16 906818.16 254.45 2001098.80 00:15:46.699 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@35 -- # kill -0 903725 00:15:46.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (903725) - No such process 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@45 -- # NOT wait 903725 00:15:46.959 13:26:44 -- common/autotest_common.sh@640 -- # local es=0 00:15:46.959 13:26:44 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 903725 00:15:46.959 13:26:44 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:46.959 13:26:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.959 13:26:44 -- common/autotest_common.sh@632 -- # type -t wait 00:15:46.959 13:26:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:46.959 13:26:44 -- common/autotest_common.sh@643 -- # wait 903725 00:15:46.959 13:26:44 -- common/autotest_common.sh@643 -- # es=1 00:15:46.959 13:26:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:46.959 13:26:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:46.959 13:26:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:46.959 13:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.959 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:46.959 13:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.959 13:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.959 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:46.959 [2024-07-26 13:26:44.339224] 
tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.959 13:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.959 13:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.959 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:46.959 13:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@54 -- # perf_pid=904414 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:46.959 13:26:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:46.959 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.959 [2024-07-26 13:26:44.407293] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:47.530 13:26:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:47.530 13:26:44 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:47.530 13:26:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:48.101 13:26:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:48.101 13:26:45 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:48.101 13:26:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:48.671 13:26:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:48.671 13:26:45 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:48.671 13:26:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:48.931 13:26:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:48.931 13:26:46 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:48.931 13:26:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:49.502 13:26:46 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:49.502 13:26:46 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:49.502 13:26:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:50.073 13:26:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:50.073 13:26:47 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:50.073 13:26:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:50.334 Initializing NVMe Controllers 00:15:50.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.334 Controller IO queue size 128, less than required. 00:15:50.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:50.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:50.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:50.334 Initialization complete. Launching workers. 
00:15:50.334 ======================================================== 00:15:50.334 Latency(us) 00:15:50.334 Device Information : IOPS MiB/s Average min max 00:15:50.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003390.96 1000330.55 1008307.77 00:15:50.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003277.96 1000225.61 1009665.69 00:15:50.334 ======================================================== 00:15:50.334 Total : 256.00 0.12 1003334.46 1000225.61 1009665.69 00:15:50.334 00:15:50.596 13:26:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:50.596 13:26:47 -- target/delete_subsystem.sh@57 -- # kill -0 904414 00:15:50.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (904414) - No such process 00:15:50.596 13:26:47 -- target/delete_subsystem.sh@67 -- # wait 904414 00:15:50.596 13:26:47 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:50.596 13:26:47 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:50.596 13:26:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.596 13:26:47 -- nvmf/common.sh@116 -- # sync 00:15:50.596 13:26:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.596 13:26:47 -- nvmf/common.sh@119 -- # set +e 00:15:50.596 13:26:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.596 13:26:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.596 rmmod nvme_tcp 00:15:50.596 rmmod nvme_fabrics 00:15:50.596 rmmod nvme_keyring 00:15:50.596 13:26:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.596 13:26:47 -- nvmf/common.sh@123 -- # set -e 00:15:50.596 13:26:47 -- nvmf/common.sh@124 -- # return 0 00:15:50.596 13:26:47 -- nvmf/common.sh@477 -- # '[' -n 903377 ']' 00:15:50.596 13:26:47 -- nvmf/common.sh@478 -- # killprocess 903377 00:15:50.596 13:26:47 -- common/autotest_common.sh@926 -- # '[' -z 903377 ']' 00:15:50.596 13:26:47 -- common/autotest_common.sh@930 -- # kill -0 903377 00:15:50.596 13:26:47 -- common/autotest_common.sh@931 -- # uname 00:15:50.596 13:26:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.596 13:26:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 903377 00:15:50.596 13:26:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:50.596 13:26:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:50.596 13:26:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 903377' 00:15:50.596 killing process with pid 903377 00:15:50.596 13:26:48 -- common/autotest_common.sh@945 -- # kill 903377 00:15:50.596 13:26:48 -- common/autotest_common.sh@950 -- # wait 903377 00:15:50.857 13:26:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.857 13:26:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:50.857 13:26:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.857 13:26:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.857 13:26:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.857 13:26:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.857 13:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.857 13:26:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.774 13:26:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:52.774 00:15:52.774 real 0m17.795s 00:15:52.774 user 0m30.735s 00:15:52.774 sys 0m6.160s 00:15:52.774 13:26:50 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.774 13:26:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.774 ************************************ 00:15:52.774 END TEST nvmf_delete_subsystem 00:15:52.774 ************************************ 00:15:52.774 13:26:50 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:52.774 13:26:50 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:52.774 13:26:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:52.774 13:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:52.774 13:26:50 -- common/autotest_common.sh@10 -- # set +x 00:15:52.774 ************************************ 00:15:52.774 START TEST nvmf_nvme_cli 00:15:52.774 ************************************ 00:15:52.774 13:26:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.036 * Looking for test storage... 00:15:53.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.036 13:26:50 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.036 13:26:50 -- nvmf/common.sh@7 -- # uname -s 00:15:53.036 13:26:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.036 13:26:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.036 13:26:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.036 13:26:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.036 13:26:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.036 13:26:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.036 13:26:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.036 13:26:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.036 13:26:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.036 13:26:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.036 13:26:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.036 13:26:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.036 13:26:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.036 13:26:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.036 13:26:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.036 13:26:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.036 13:26:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.036 13:26:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.036 13:26:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.036 13:26:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.036 13:26:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.036 13:26:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.036 13:26:50 -- paths/export.sh@5 -- # export PATH 00:15:53.036 13:26:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.036 13:26:50 -- nvmf/common.sh@46 -- # : 0 00:15:53.036 13:26:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.036 13:26:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.036 13:26:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.036 13:26:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.036 13:26:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.036 13:26:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:53.036 13:26:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.036 13:26:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.036 13:26:50 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.036 13:26:50 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.036 13:26:50 -- target/nvme_cli.sh@14 -- # devs=() 00:15:53.036 13:26:50 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:53.036 13:26:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:53.036 13:26:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.036 13:26:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:53.036 13:26:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:53.036 13:26:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:53.036 13:26:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.036 13:26:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.036 13:26:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.036 13:26:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:53.036 13:26:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:53.036 13:26:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:53.036 13:26:50 -- common/autotest_common.sh@10 -- # set +x 00:15:59.628 13:26:56 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:59.628 13:26:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:59.628 13:26:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:59.628 13:26:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:59.628 13:26:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:59.628 13:26:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:59.628 13:26:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:59.628 13:26:56 -- nvmf/common.sh@294 -- # net_devs=() 00:15:59.628 13:26:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:59.628 13:26:56 -- nvmf/common.sh@295 -- # e810=() 00:15:59.628 13:26:56 -- nvmf/common.sh@295 -- # local -ga e810 00:15:59.628 13:26:56 -- nvmf/common.sh@296 -- # x722=() 00:15:59.628 13:26:56 -- nvmf/common.sh@296 -- # local -ga x722 00:15:59.628 13:26:56 -- nvmf/common.sh@297 -- # mlx=() 00:15:59.628 13:26:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:59.628 13:26:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.628 13:26:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:59.628 13:26:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:59.628 13:26:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:59.628 13:26:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:59.628 13:26:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:59.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:59.628 13:26:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:59.628 13:26:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:59.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:59.628 13:26:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
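The trace above shows the harness classifying candidate NICs purely by PCI vendor/device ID (Intel 0x159b/0x1592 for E810, 0x37d2 for X722, and the Mellanox 0x15b3 ConnectX IDs). A rough standalone sketch of the same idea, assuming lspci is available and using illustrative array names rather than the actual nvmf/common.sh internals:

# Sketch only: bucket NICs by vendor:device ID, as the trace above does.
# The array names and the lspci parsing are assumptions, not SPDK's real code.
e810=() x722=() mlx=()
while read -r addr _class vendor device _rest; do
  case "$vendor:$device" in
    8086:159b|8086:1592) e810+=("$addr") ;;  # Intel E810, e.g. "Found 0000:4b:00.0 (0x8086 - 0x159b)"
    8086:37d2)           x722+=("$addr") ;;  # Intel X722
    15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX family
  esac
done < <(lspci -Dnmm | tr -d '"')
echo "e810 devices found: ${#e810[@]}"

Only the devices that land in these buckets are then probed for kernel net devices, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines that follow come from.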
00:15:59.628 13:26:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:59.628 13:26:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:59.628 13:26:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:59.628 13:26:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.628 13:26:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:59.628 13:26:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.628 13:26:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:59.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:59.628 13:26:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.628 13:26:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:59.628 13:26:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.628 13:26:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:59.628 13:26:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.628 13:26:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:59.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:59.628 13:26:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.628 13:26:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:59.628 13:26:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:59.628 13:26:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:59.628 13:26:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:59.628 13:26:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:59.628 13:26:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.628 13:26:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.628 13:26:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.628 13:26:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:59.628 13:26:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.628 13:26:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.628 13:26:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:59.628 13:26:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.628 13:26:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.628 13:26:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:59.628 13:26:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:59.628 13:26:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.628 13:26:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.890 13:26:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.890 13:26:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.890 13:26:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:59.890 13:26:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.890 13:26:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.890 13:26:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.890 13:26:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:59.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:59.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:15:59.890 00:15:59.890 --- 10.0.0.2 ping statistics --- 00:15:59.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.890 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:15:59.890 13:26:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.463 ms 00:15:59.890 00:15:59.890 --- 10.0.0.1 ping statistics --- 00:15:59.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.890 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:15:59.890 13:26:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.890 13:26:57 -- nvmf/common.sh@410 -- # return 0 00:15:59.890 13:26:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:59.890 13:26:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.890 13:26:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:59.890 13:26:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:59.890 13:26:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.890 13:26:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:59.890 13:26:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:00.151 13:26:57 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:00.151 13:26:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.151 13:26:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:00.151 13:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:00.151 13:26:57 -- nvmf/common.sh@469 -- # nvmfpid=909245 00:16:00.151 13:26:57 -- nvmf/common.sh@470 -- # waitforlisten 909245 00:16:00.151 13:26:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:00.151 13:26:57 -- common/autotest_common.sh@819 -- # '[' -z 909245 ']' 00:16:00.151 13:26:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.151 13:26:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:00.151 13:26:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.151 13:26:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:00.151 13:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:00.151 [2024-07-26 13:26:57.445756] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:00.151 [2024-07-26 13:26:57.445820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.151 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.151 [2024-07-26 13:26:57.520442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.151 [2024-07-26 13:26:57.558720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.151 [2024-07-26 13:26:57.558876] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.151 [2024-07-26 13:26:57.558887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
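Past the ping checks, the harness launches the NVMe-oF target inside the cvl_0_0_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and then waits for its RPC socket, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message refers to. A minimal sketch of that start-and-poll pattern, assuming the paths shown in the log; the real harness uses its own waitforlisten helper:

# Sketch only: start nvmf_tgt in the target netns and poll until RPC answers.
# SPDK_DIR, the ~10 s budget and the rpc_get_methods probe are assumptions.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
for _ in $(seq 1 100); do
  sudo "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sudo kill -0 "$tgt_pid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.1
done

Once the socket answers, the per-test configuration that appears further down (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) is issued through the same rpc.py socket.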
00:16:00.151 [2024-07-26 13:26:57.558896] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.151 [2024-07-26 13:26:57.559043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.151 [2024-07-26 13:26:57.559165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.151 [2024-07-26 13:26:57.559327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.151 [2024-07-26 13:26:57.559327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.095 13:26:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:01.095 13:26:58 -- common/autotest_common.sh@852 -- # return 0 00:16:01.095 13:26:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:01.095 13:26:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 13:26:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.095 13:26:58 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 [2024-07-26 13:26:58.258436] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 Malloc0 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 Malloc1 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 [2024-07-26 13:26:58.348149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.095 13:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.095 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:01.095 13:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.095 13:26:58 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:01.095 00:16:01.095 Discovery Log Number of Records 2, Generation counter 2 00:16:01.095 =====Discovery Log Entry 0====== 00:16:01.095 trtype: tcp 00:16:01.095 adrfam: ipv4 00:16:01.095 subtype: current discovery subsystem 00:16:01.095 treq: not required 00:16:01.095 portid: 0 00:16:01.095 trsvcid: 4420 00:16:01.095 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:01.095 traddr: 10.0.0.2 00:16:01.095 eflags: explicit discovery connections, duplicate discovery information 00:16:01.095 sectype: none 00:16:01.095 =====Discovery Log Entry 1====== 00:16:01.095 trtype: tcp 00:16:01.095 adrfam: ipv4 00:16:01.095 subtype: nvme subsystem 00:16:01.095 treq: not required 00:16:01.095 portid: 0 00:16:01.095 trsvcid: 4420 00:16:01.095 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:01.095 traddr: 10.0.0.2 00:16:01.095 eflags: none 00:16:01.095 sectype: none 00:16:01.095 13:26:58 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:01.095 13:26:58 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:01.095 13:26:58 -- nvmf/common.sh@510 -- # local dev _ 00:16:01.095 13:26:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:01.095 13:26:58 -- nvmf/common.sh@509 -- # nvme list 00:16:01.095 13:26:58 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:01.095 13:26:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:01.095 13:26:58 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:01.095 13:26:58 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:01.095 13:26:58 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:01.095 13:26:58 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.012 13:27:00 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:03.012 13:27:00 -- common/autotest_common.sh@1177 -- # local i=0 00:16:03.012 13:27:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.012 13:27:00 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:03.012 13:27:00 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:03.012 13:27:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:04.928 13:27:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:04.928 13:27:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:04.929 13:27:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.929 13:27:02 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:04.929 13:27:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.929 13:27:02 -- common/autotest_common.sh@1187 -- # return 0 00:16:04.929 13:27:02 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:04.929 13:27:02 -- 
nvmf/common.sh@510 -- # local dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@509 -- # nvme list 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:04.929 /dev/nvme0n1 ]] 00:16:04.929 13:27:02 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:04.929 13:27:02 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:04.929 13:27:02 -- nvmf/common.sh@510 -- # local dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@509 -- # nvme list 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:04.929 13:27:02 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:04.929 13:27:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:04.929 13:27:02 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:04.929 13:27:02 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.929 13:27:02 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.929 13:27:02 -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.929 13:27:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:04.929 13:27:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.929 13:27:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:04.929 13:27:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.929 13:27:02 -- common/autotest_common.sh@1210 -- # return 0 00:16:04.929 13:27:02 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:04.929 13:27:02 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.929 13:27:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:04.929 13:27:02 -- common/autotest_common.sh@10 -- # set +x 00:16:04.929 13:27:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:04.929 13:27:02 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:04.929 13:27:02 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:04.929 13:27:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:04.929 13:27:02 -- nvmf/common.sh@116 -- # sync 00:16:04.929 13:27:02 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:04.929 13:27:02 -- nvmf/common.sh@119 -- # set +e 00:16:04.929 13:27:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:04.929 13:27:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:04.929 rmmod nvme_tcp 00:16:04.929 rmmod nvme_fabrics 00:16:04.929 rmmod nvme_keyring 00:16:04.929 13:27:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:04.929 13:27:02 -- nvmf/common.sh@123 -- # set -e 00:16:04.929 13:27:02 -- nvmf/common.sh@124 -- # return 0 00:16:04.929 13:27:02 -- nvmf/common.sh@477 -- # '[' -n 909245 ']' 00:16:04.929 13:27:02 -- nvmf/common.sh@478 -- # killprocess 909245 00:16:04.929 13:27:02 -- common/autotest_common.sh@926 -- # '[' -z 909245 ']' 00:16:04.929 13:27:02 -- common/autotest_common.sh@930 -- # kill -0 909245 00:16:04.929 13:27:02 -- common/autotest_common.sh@931 -- # uname 00:16:04.929 13:27:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.929 13:27:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 909245 00:16:04.929 13:27:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.929 13:27:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.929 13:27:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 909245' 00:16:04.929 killing process with pid 909245 00:16:04.929 13:27:02 -- common/autotest_common.sh@945 -- # kill 909245 00:16:04.929 13:27:02 -- common/autotest_common.sh@950 -- # wait 909245 00:16:05.190 13:27:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:05.190 13:27:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:05.190 13:27:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:05.190 13:27:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.190 13:27:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:05.190 13:27:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.190 13:27:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.190 13:27:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.740 13:27:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:07.740 00:16:07.740 real 0m14.352s 00:16:07.740 user 0m21.809s 00:16:07.740 sys 0m5.789s 00:16:07.740 13:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.740 13:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:07.740 ************************************ 00:16:07.740 END TEST nvmf_nvme_cli 00:16:07.740 ************************************ 00:16:07.740 13:27:04 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:07.740 13:27:04 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:07.740 13:27:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:07.740 13:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.740 13:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:07.740 ************************************ 00:16:07.740 START TEST nvmf_vfio_user 00:16:07.740 ************************************ 00:16:07.740 13:27:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:07.740 * Looking for test storage... 
00:16:07.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.740 13:27:04 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.740 13:27:04 -- nvmf/common.sh@7 -- # uname -s 00:16:07.740 13:27:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.740 13:27:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.740 13:27:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.740 13:27:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.740 13:27:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.740 13:27:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.740 13:27:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.740 13:27:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.740 13:27:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.740 13:27:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.740 13:27:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.740 13:27:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.740 13:27:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.740 13:27:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.740 13:27:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.740 13:27:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.740 13:27:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.740 13:27:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.740 13:27:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.741 13:27:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.741 13:27:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.741 13:27:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.741 13:27:04 -- paths/export.sh@5 -- # export PATH 00:16:07.741 13:27:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.741 13:27:04 -- nvmf/common.sh@46 -- # : 0 00:16:07.741 13:27:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.741 13:27:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.741 13:27:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.741 13:27:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.741 13:27:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.741 13:27:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:07.741 13:27:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.741 13:27:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=911034 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 911034' 00:16:07.741 Process pid: 911034 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 911034 00:16:07.741 13:27:04 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:07.741 13:27:04 -- common/autotest_common.sh@819 -- # '[' -z 911034 ']' 00:16:07.741 13:27:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.741 13:27:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.741 13:27:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.741 13:27:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.741 13:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:07.741 [2024-07-26 13:27:04.843422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:07.741 [2024-07-26 13:27:04.843523] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.741 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.741 [2024-07-26 13:27:04.910045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.741 [2024-07-26 13:27:04.939788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.741 [2024-07-26 13:27:04.939929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.741 [2024-07-26 13:27:04.939940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.741 [2024-07-26 13:27:04.939948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.741 [2024-07-26 13:27:04.940091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.741 [2024-07-26 13:27:04.940234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.741 [2024-07-26 13:27:04.940391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.741 [2024-07-26 13:27:04.940391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.314 13:27:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.314 13:27:05 -- common/autotest_common.sh@852 -- # return 0 00:16:08.314 13:27:05 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:09.257 13:27:06 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:09.520 Malloc1 00:16:09.520 13:27:06 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:09.781 13:27:07 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:10.042 13:27:07 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:10.042 13:27:07 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.042 13:27:07 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:10.042 13:27:07 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:10.304 Malloc2 00:16:10.304 13:27:07 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:10.600 13:27:07 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:10.600 13:27:07 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:10.864 13:27:08 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:10.864 [2024-07-26 13:27:08.155124] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:10.864 [2024-07-26 13:27:08.155178] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911734 ] 00:16:10.864 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.864 [2024-07-26 13:27:08.187814] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:10.864 [2024-07-26 13:27:08.196463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.864 [2024-07-26 13:27:08.196482] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb2f69de000 00:16:10.864 [2024-07-26 13:27:08.197456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.198460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.199460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.200469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.201467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.202472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.203478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.204490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:10.864 [2024-07-26 13:27:08.205504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:10.864 [2024-07-26 13:27:08.205513] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb2f57a4000 00:16:10.864 [2024-07-26 13:27:08.206839] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.864 [2024-07-26 13:27:08.227353] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:10.864 [2024-07-26 13:27:08.227381] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:10.864 [2024-07-26 13:27:08.229648] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:10.864 [2024-07-26 13:27:08.229697] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:10.864 [2024-07-26 13:27:08.229787] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:10.864 [2024-07-26 13:27:08.229805] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:10.864 [2024-07-26 13:27:08.229811] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:10.864 [2024-07-26 13:27:08.230647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:10.864 [2024-07-26 13:27:08.230656] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:10.864 [2024-07-26 13:27:08.230663] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:10.864 [2024-07-26 13:27:08.231654] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:10.864 [2024-07-26 13:27:08.231662] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:10.864 [2024-07-26 13:27:08.231670] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.232658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:10.864 [2024-07-26 13:27:08.232666] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.233664] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:16:10.864 [2024-07-26 13:27:08.233672] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:10.864 [2024-07-26 13:27:08.233677] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.233684] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.233789] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:10.864 [2024-07-26 13:27:08.233794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.233799] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:10.864 [2024-07-26 13:27:08.234667] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:10.864 [2024-07-26 13:27:08.235670] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:10.864 [2024-07-26 13:27:08.236675] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:10.864 [2024-07-26 13:27:08.237694] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:10.864 [2024-07-26 13:27:08.238679] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:10.864 [2024-07-26 13:27:08.238687] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:10.864 [2024-07-26 13:27:08.238692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238713] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:10.865 [2024-07-26 13:27:08.238720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238733] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.865 [2024-07-26 13:27:08.238738] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.865 [2024-07-26 13:27:08.238753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.238802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.238812] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:10.865 [2024-07-26 13:27:08.238816] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:10.865 [2024-07-26 13:27:08.238823] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:10.865 [2024-07-26 13:27:08.238827] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:10.865 [2024-07-26 13:27:08.238832] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:10.865 [2024-07-26 13:27:08.238837] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:10.865 [2024-07-26 13:27:08.238841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.238869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.238882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.865 [2024-07-26 13:27:08.238890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.865 [2024-07-26 13:27:08.238898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.865 [2024-07-26 13:27:08.238907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.865 [2024-07-26 13:27:08.238911] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238919] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.238940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.238945] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:10.865 [2024-07-26 13:27:08.238951] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238957] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238965] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.238974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.238981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239047] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239055] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:10.865 [2024-07-26 13:27:08.239063] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:10.865 [2024-07-26 13:27:08.239069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239089] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:10.865 [2024-07-26 13:27:08.239097] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239111] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.865 [2024-07-26 13:27:08.239115] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.865 [2024-07-26 13:27:08.239122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239165] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:10.865 [2024-07-26 13:27:08.239169] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.865 [2024-07-26 13:27:08.239175] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239206] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239214] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239225] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239230] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:10.865 [2024-07-26 13:27:08.239234] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:10.865 [2024-07-26 13:27:08.239239] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:10.865 [2024-07-26 13:27:08.239256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:10.865 [2024-07-26 13:27:08.239341] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:10.865 [2024-07-26 13:27:08.239346] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:10.865 [2024-07-26 13:27:08.239349] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:10.865 [2024-07-26 
13:27:08.239353] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:10.865 [2024-07-26 13:27:08.239359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:10.865 [2024-07-26 13:27:08.239366] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:10.865 [2024-07-26 13:27:08.239370] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:10.865 [2024-07-26 13:27:08.239376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:10.865 [2024-07-26 13:27:08.239383] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:10.865 [2024-07-26 13:27:08.239387] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:10.866 [2024-07-26 13:27:08.239393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:10.866 [2024-07-26 13:27:08.239401] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:10.866 [2024-07-26 13:27:08.239405] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:10.866 [2024-07-26 13:27:08.239411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:10.866 [2024-07-26 13:27:08.239418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:10.866 [2024-07-26 13:27:08.239431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:10.866 [2024-07-26 13:27:08.239439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:10.866 [2024-07-26 13:27:08.239446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:10.866 ===================================================== 00:16:10.866 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:10.866 ===================================================== 00:16:10.866 Controller Capabilities/Features 00:16:10.866 ================================ 00:16:10.866 Vendor ID: 4e58 00:16:10.866 Subsystem Vendor ID: 4e58 00:16:10.866 Serial Number: SPDK1 00:16:10.866 Model Number: SPDK bdev Controller 00:16:10.866 Firmware Version: 24.01.1 00:16:10.866 Recommended Arb Burst: 6 00:16:10.866 IEEE OUI Identifier: 8d 6b 50 00:16:10.866 Multi-path I/O 00:16:10.866 May have multiple subsystem ports: Yes 00:16:10.866 May have multiple controllers: Yes 00:16:10.866 Associated with SR-IOV VF: No 00:16:10.866 Max Data Transfer Size: 131072 00:16:10.866 Max Number of Namespaces: 32 00:16:10.866 Max Number of I/O Queues: 127 00:16:10.866 NVMe Specification Version (VS): 1.3 00:16:10.866 NVMe Specification Version (Identify): 1.3 00:16:10.866 Maximum Queue Entries: 256 00:16:10.866 Contiguous Queues Required: Yes 00:16:10.866 Arbitration Mechanisms Supported 00:16:10.866 
Weighted Round Robin: Not Supported 00:16:10.866 Vendor Specific: Not Supported 00:16:10.866 Reset Timeout: 15000 ms 00:16:10.866 Doorbell Stride: 4 bytes 00:16:10.866 NVM Subsystem Reset: Not Supported 00:16:10.866 Command Sets Supported 00:16:10.866 NVM Command Set: Supported 00:16:10.866 Boot Partition: Not Supported 00:16:10.866 Memory Page Size Minimum: 4096 bytes 00:16:10.866 Memory Page Size Maximum: 4096 bytes 00:16:10.866 Persistent Memory Region: Not Supported 00:16:10.866 Optional Asynchronous Events Supported 00:16:10.866 Namespace Attribute Notices: Supported 00:16:10.866 Firmware Activation Notices: Not Supported 00:16:10.866 ANA Change Notices: Not Supported 00:16:10.866 PLE Aggregate Log Change Notices: Not Supported 00:16:10.866 LBA Status Info Alert Notices: Not Supported 00:16:10.866 EGE Aggregate Log Change Notices: Not Supported 00:16:10.866 Normal NVM Subsystem Shutdown event: Not Supported 00:16:10.866 Zone Descriptor Change Notices: Not Supported 00:16:10.866 Discovery Log Change Notices: Not Supported 00:16:10.866 Controller Attributes 00:16:10.866 128-bit Host Identifier: Supported 00:16:10.866 Non-Operational Permissive Mode: Not Supported 00:16:10.866 NVM Sets: Not Supported 00:16:10.866 Read Recovery Levels: Not Supported 00:16:10.866 Endurance Groups: Not Supported 00:16:10.866 Predictable Latency Mode: Not Supported 00:16:10.866 Traffic Based Keep ALive: Not Supported 00:16:10.866 Namespace Granularity: Not Supported 00:16:10.866 SQ Associations: Not Supported 00:16:10.866 UUID List: Not Supported 00:16:10.866 Multi-Domain Subsystem: Not Supported 00:16:10.866 Fixed Capacity Management: Not Supported 00:16:10.866 Variable Capacity Management: Not Supported 00:16:10.866 Delete Endurance Group: Not Supported 00:16:10.866 Delete NVM Set: Not Supported 00:16:10.866 Extended LBA Formats Supported: Not Supported 00:16:10.866 Flexible Data Placement Supported: Not Supported 00:16:10.866 00:16:10.866 Controller Memory Buffer Support 00:16:10.866 ================================ 00:16:10.866 Supported: No 00:16:10.866 00:16:10.866 Persistent Memory Region Support 00:16:10.866 ================================ 00:16:10.866 Supported: No 00:16:10.866 00:16:10.866 Admin Command Set Attributes 00:16:10.866 ============================ 00:16:10.866 Security Send/Receive: Not Supported 00:16:10.866 Format NVM: Not Supported 00:16:10.866 Firmware Activate/Download: Not Supported 00:16:10.866 Namespace Management: Not Supported 00:16:10.866 Device Self-Test: Not Supported 00:16:10.866 Directives: Not Supported 00:16:10.866 NVMe-MI: Not Supported 00:16:10.866 Virtualization Management: Not Supported 00:16:10.866 Doorbell Buffer Config: Not Supported 00:16:10.866 Get LBA Status Capability: Not Supported 00:16:10.866 Command & Feature Lockdown Capability: Not Supported 00:16:10.866 Abort Command Limit: 4 00:16:10.866 Async Event Request Limit: 4 00:16:10.866 Number of Firmware Slots: N/A 00:16:10.866 Firmware Slot 1 Read-Only: N/A 00:16:10.866 Firmware Activation Without Reset: N/A 00:16:10.866 Multiple Update Detection Support: N/A 00:16:10.866 Firmware Update Granularity: No Information Provided 00:16:10.866 Per-Namespace SMART Log: No 00:16:10.866 Asymmetric Namespace Access Log Page: Not Supported 00:16:10.866 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:10.866 Command Effects Log Page: Supported 00:16:10.866 Get Log Page Extended Data: Supported 00:16:10.866 Telemetry Log Pages: Not Supported 00:16:10.866 Persistent Event Log Pages: Not Supported 00:16:10.866 Supported 
Log Pages Log Page: May Support 00:16:10.866 Commands Supported & Effects Log Page: Not Supported 00:16:10.866 Feature Identifiers & Effects Log Page:May Support 00:16:10.866 NVMe-MI Commands & Effects Log Page: May Support 00:16:10.866 Data Area 4 for Telemetry Log: Not Supported 00:16:10.866 Error Log Page Entries Supported: 128 00:16:10.866 Keep Alive: Supported 00:16:10.866 Keep Alive Granularity: 10000 ms 00:16:10.866 00:16:10.866 NVM Command Set Attributes 00:16:10.866 ========================== 00:16:10.866 Submission Queue Entry Size 00:16:10.866 Max: 64 00:16:10.866 Min: 64 00:16:10.866 Completion Queue Entry Size 00:16:10.866 Max: 16 00:16:10.866 Min: 16 00:16:10.866 Number of Namespaces: 32 00:16:10.866 Compare Command: Supported 00:16:10.866 Write Uncorrectable Command: Not Supported 00:16:10.866 Dataset Management Command: Supported 00:16:10.866 Write Zeroes Command: Supported 00:16:10.866 Set Features Save Field: Not Supported 00:16:10.866 Reservations: Not Supported 00:16:10.866 Timestamp: Not Supported 00:16:10.866 Copy: Supported 00:16:10.866 Volatile Write Cache: Present 00:16:10.866 Atomic Write Unit (Normal): 1 00:16:10.866 Atomic Write Unit (PFail): 1 00:16:10.866 Atomic Compare & Write Unit: 1 00:16:10.866 Fused Compare & Write: Supported 00:16:10.866 Scatter-Gather List 00:16:10.866 SGL Command Set: Supported (Dword aligned) 00:16:10.866 SGL Keyed: Not Supported 00:16:10.866 SGL Bit Bucket Descriptor: Not Supported 00:16:10.866 SGL Metadata Pointer: Not Supported 00:16:10.866 Oversized SGL: Not Supported 00:16:10.866 SGL Metadata Address: Not Supported 00:16:10.866 SGL Offset: Not Supported 00:16:10.866 Transport SGL Data Block: Not Supported 00:16:10.866 Replay Protected Memory Block: Not Supported 00:16:10.866 00:16:10.866 Firmware Slot Information 00:16:10.866 ========================= 00:16:10.866 Active slot: 1 00:16:10.866 Slot 1 Firmware Revision: 24.01.1 00:16:10.866 00:16:10.866 00:16:10.866 Commands Supported and Effects 00:16:10.866 ============================== 00:16:10.866 Admin Commands 00:16:10.866 -------------- 00:16:10.866 Get Log Page (02h): Supported 00:16:10.866 Identify (06h): Supported 00:16:10.866 Abort (08h): Supported 00:16:10.866 Set Features (09h): Supported 00:16:10.866 Get Features (0Ah): Supported 00:16:10.866 Asynchronous Event Request (0Ch): Supported 00:16:10.866 Keep Alive (18h): Supported 00:16:10.866 I/O Commands 00:16:10.866 ------------ 00:16:10.866 Flush (00h): Supported LBA-Change 00:16:10.866 Write (01h): Supported LBA-Change 00:16:10.866 Read (02h): Supported 00:16:10.866 Compare (05h): Supported 00:16:10.866 Write Zeroes (08h): Supported LBA-Change 00:16:10.866 Dataset Management (09h): Supported LBA-Change 00:16:10.866 Copy (19h): Supported LBA-Change 00:16:10.866 Unknown (79h): Supported LBA-Change 00:16:10.867 Unknown (7Ah): Supported 00:16:10.867 00:16:10.867 Error Log 00:16:10.867 ========= 00:16:10.867 00:16:10.867 Arbitration 00:16:10.867 =========== 00:16:10.867 Arbitration Burst: 1 00:16:10.867 00:16:10.867 Power Management 00:16:10.867 ================ 00:16:10.867 Number of Power States: 1 00:16:10.867 Current Power State: Power State #0 00:16:10.867 Power State #0: 00:16:10.867 Max Power: 0.00 W 00:16:10.867 Non-Operational State: Operational 00:16:10.867 Entry Latency: Not Reported 00:16:10.867 Exit Latency: Not Reported 00:16:10.867 Relative Read Throughput: 0 00:16:10.867 Relative Read Latency: 0 00:16:10.867 Relative Write Throughput: 0 00:16:10.867 Relative Write Latency: 0 00:16:10.867 Idle Power: Not 
Reported 00:16:10.867 Active Power: Not Reported 00:16:10.867 Non-Operational Permissive Mode: Not Supported 00:16:10.867 00:16:10.867 Health Information 00:16:10.867 ================== 00:16:10.867 Critical Warnings: 00:16:10.867 Available Spare Space: OK 00:16:10.867 Temperature: OK 00:16:10.867 Device Reliability: OK 00:16:10.867 Read Only: No 00:16:10.867 Volatile Memory Backup: OK 00:16:10.867 Current Temperature: 0 Kelvin[2024-07-26 13:27:08.239549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:10.867 [2024-07-26 13:27:08.239560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:10.867 [2024-07-26 13:27:08.239586] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:10.867 [2024-07-26 13:27:08.239596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.867 [2024-07-26 13:27:08.239603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.867 [2024-07-26 13:27:08.239609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.867 [2024-07-26 13:27:08.239615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.867 [2024-07-26 13:27:08.242207] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:10.867 [2024-07-26 13:27:08.242218] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:10.867 [2024-07-26 13:27:08.242737] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:10.867 [2024-07-26 13:27:08.242743] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:10.867 [2024-07-26 13:27:08.243720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:10.867 [2024-07-26 13:27:08.243731] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:10.867 [2024-07-26 13:27:08.243792] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:10.867 [2024-07-26 13:27:08.245744] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:10.867 (-273 Celsius) 00:16:10.867 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:10.867 Available Spare: 0% 00:16:10.867 Available Spare Threshold: 0% 00:16:10.867 Life Percentage Used: 0% 00:16:10.867 Data Units Read: 0 00:16:10.867 Data Units Written: 0 00:16:10.867 Host Read Commands: 0 00:16:10.867 Host Write Commands: 0 00:16:10.867 Controller Busy Time: 0 minutes 00:16:10.867 Power Cycles: 0 00:16:10.867 Power On Hours: 0 hours 00:16:10.867 Unsafe Shutdowns: 0 00:16:10.867 Unrecoverable Media Errors: 0 00:16:10.867 Lifetime Error Log Entries: 0 00:16:10.867 Warning Temperature 
Time: 0 minutes 00:16:10.867 Critical Temperature Time: 0 minutes 00:16:10.867 00:16:10.867 Number of Queues 00:16:10.867 ================ 00:16:10.867 Number of I/O Submission Queues: 127 00:16:10.867 Number of I/O Completion Queues: 127 00:16:10.867 00:16:10.867 Active Namespaces 00:16:10.867 ================= 00:16:10.867 Namespace ID:1 00:16:10.867 Error Recovery Timeout: Unlimited 00:16:10.867 Command Set Identifier: NVM (00h) 00:16:10.867 Deallocate: Supported 00:16:10.867 Deallocated/Unwritten Error: Not Supported 00:16:10.867 Deallocated Read Value: Unknown 00:16:10.867 Deallocate in Write Zeroes: Not Supported 00:16:10.867 Deallocated Guard Field: 0xFFFF 00:16:10.867 Flush: Supported 00:16:10.867 Reservation: Supported 00:16:10.867 Namespace Sharing Capabilities: Multiple Controllers 00:16:10.867 Size (in LBAs): 131072 (0GiB) 00:16:10.867 Capacity (in LBAs): 131072 (0GiB) 00:16:10.867 Utilization (in LBAs): 131072 (0GiB) 00:16:10.867 NGUID: FF20F6DA1D1340029D58AB7FAE91F289 00:16:10.867 UUID: ff20f6da-1d13-4002-9d58-ab7fae91f289 00:16:10.867 Thin Provisioning: Not Supported 00:16:10.867 Per-NS Atomic Units: Yes 00:16:10.867 Atomic Boundary Size (Normal): 0 00:16:10.867 Atomic Boundary Size (PFail): 0 00:16:10.867 Atomic Boundary Offset: 0 00:16:10.867 Maximum Single Source Range Length: 65535 00:16:10.867 Maximum Copy Length: 65535 00:16:10.867 Maximum Source Range Count: 1 00:16:10.867 NGUID/EUI64 Never Reused: No 00:16:10.867 Namespace Write Protected: No 00:16:10.867 Number of LBA Formats: 1 00:16:10.867 Current LBA Format: LBA Format #00 00:16:10.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:10.867 00:16:10.867 13:27:08 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:10.867 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.157 Initializing NVMe Controllers 00:16:16.157 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:16.157 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:16.157 Initialization complete. Launching workers. 00:16:16.157 ======================================================== 00:16:16.157 Latency(us) 00:16:16.157 Device Information : IOPS MiB/s Average min max 00:16:16.157 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39951.49 156.06 3203.56 845.79 7254.79 00:16:16.157 ======================================================== 00:16:16.157 Total : 39951.49 156.06 3203.56 845.79 7254.79 00:16:16.157 00:16:16.157 13:27:13 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:16.157 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.449 Initializing NVMe Controllers 00:16:21.449 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:21.449 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:21.449 Initialization complete. Launching workers. 
00:16:21.449 ======================================================== 00:16:21.449 Latency(us) 00:16:21.449 Device Information : IOPS MiB/s Average min max 00:16:21.449 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7629.43 7986.65 00:16:21.449 ======================================================== 00:16:21.449 Total : 16051.20 62.70 7980.74 7629.43 7986.65 00:16:21.449 00:16:21.449 13:27:18 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:21.449 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.744 Initializing NVMe Controllers 00:16:26.744 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:26.744 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:26.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:26.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:26.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:26.744 Initialization complete. Launching workers. 00:16:26.744 Starting thread on core 2 00:16:26.744 Starting thread on core 3 00:16:26.744 Starting thread on core 1 00:16:26.744 13:27:23 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:26.744 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.049 Initializing NVMe Controllers 00:16:30.049 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.049 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.049 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:30.049 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:30.049 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:30.049 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:30.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:30.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:30.049 Initialization complete. Launching workers. 
00:16:30.049 Starting thread on core 1 with urgent priority queue 00:16:30.049 Starting thread on core 2 with urgent priority queue 00:16:30.049 Starting thread on core 3 with urgent priority queue 00:16:30.049 Starting thread on core 0 with urgent priority queue 00:16:30.049 SPDK bdev Controller (SPDK1 ) core 0: 13256.33 IO/s 7.54 secs/100000 ios 00:16:30.049 SPDK bdev Controller (SPDK1 ) core 1: 11813.33 IO/s 8.47 secs/100000 ios 00:16:30.049 SPDK bdev Controller (SPDK1 ) core 2: 10105.33 IO/s 9.90 secs/100000 ios 00:16:30.049 SPDK bdev Controller (SPDK1 ) core 3: 11694.67 IO/s 8.55 secs/100000 ios 00:16:30.049 ======================================================== 00:16:30.049 00:16:30.049 13:27:27 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:30.049 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.310 Initializing NVMe Controllers 00:16:30.310 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.310 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.310 Namespace ID: 1 size: 0GB 00:16:30.310 Initialization complete. 00:16:30.310 INFO: using host memory buffer for IO 00:16:30.310 Hello world! 00:16:30.310 13:27:27 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:30.310 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.696 Initializing NVMe Controllers 00:16:31.696 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.696 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.696 Initialization complete. Launching workers. 
00:16:31.696 submit (in ns) avg, min, max = 6543.8, 3852.5, 4006783.3 00:16:31.696 complete (in ns) avg, min, max = 19434.0, 2341.7, 4003820.0 00:16:31.696 00:16:31.696 Submit histogram 00:16:31.696 ================ 00:16:31.696 Range in us Cumulative Count 00:16:31.696 3.840 - 3.867: 0.5901% ( 112) 00:16:31.696 3.867 - 3.893: 5.1736% ( 870) 00:16:31.696 3.893 - 3.920: 12.8444% ( 1456) 00:16:31.696 3.920 - 3.947: 23.0019% ( 1928) 00:16:31.696 3.947 - 3.973: 34.1974% ( 2125) 00:16:31.696 3.973 - 4.000: 45.8142% ( 2205) 00:16:31.696 4.000 - 4.027: 61.1348% ( 2908) 00:16:31.696 4.027 - 4.053: 77.1298% ( 3036) 00:16:31.696 4.053 - 4.080: 88.5306% ( 2164) 00:16:31.696 4.080 - 4.107: 95.2795% ( 1281) 00:16:31.696 4.107 - 4.133: 98.1824% ( 551) 00:16:31.696 4.133 - 4.160: 99.1623% ( 186) 00:16:31.696 4.160 - 4.187: 99.4363% ( 52) 00:16:31.696 4.187 - 4.213: 99.5100% ( 14) 00:16:31.696 4.213 - 4.240: 99.5206% ( 2) 00:16:31.696 4.293 - 4.320: 99.5258% ( 1) 00:16:31.696 4.480 - 4.507: 99.5311% ( 1) 00:16:31.696 4.640 - 4.667: 99.5364% ( 1) 00:16:31.696 4.667 - 4.693: 99.5416% ( 1) 00:16:31.697 4.747 - 4.773: 99.5469% ( 1) 00:16:31.697 4.880 - 4.907: 99.5522% ( 1) 00:16:31.697 5.013 - 5.040: 99.5575% ( 1) 00:16:31.697 5.147 - 5.173: 99.5680% ( 2) 00:16:31.697 5.333 - 5.360: 99.5733% ( 1) 00:16:31.697 5.920 - 5.947: 99.5838% ( 2) 00:16:31.697 6.000 - 6.027: 99.5891% ( 1) 00:16:31.697 6.027 - 6.053: 99.5996% ( 2) 00:16:31.697 6.133 - 6.160: 99.6049% ( 1) 00:16:31.697 6.160 - 6.187: 99.6154% ( 2) 00:16:31.697 6.187 - 6.213: 99.6207% ( 1) 00:16:31.697 6.293 - 6.320: 99.6259% ( 1) 00:16:31.697 6.373 - 6.400: 99.6365% ( 2) 00:16:31.697 6.427 - 6.453: 99.6417% ( 1) 00:16:31.697 6.453 - 6.480: 99.6470% ( 1) 00:16:31.697 6.507 - 6.533: 99.6523% ( 1) 00:16:31.697 6.533 - 6.560: 99.6628% ( 2) 00:16:31.697 6.560 - 6.587: 99.6681% ( 1) 00:16:31.697 6.747 - 6.773: 99.6997% ( 6) 00:16:31.697 6.773 - 6.800: 99.7155% ( 3) 00:16:31.697 6.827 - 6.880: 99.7313% ( 3) 00:16:31.697 6.880 - 6.933: 99.7366% ( 1) 00:16:31.697 6.933 - 6.987: 99.7524% ( 3) 00:16:31.697 6.987 - 7.040: 99.7682% ( 3) 00:16:31.697 7.040 - 7.093: 99.7787% ( 2) 00:16:31.697 7.147 - 7.200: 99.7893% ( 2) 00:16:31.697 7.200 - 7.253: 99.7998% ( 2) 00:16:31.697 7.253 - 7.307: 99.8156% ( 3) 00:16:31.697 7.307 - 7.360: 99.8261% ( 2) 00:16:31.697 7.360 - 7.413: 99.8314% ( 1) 00:16:31.697 7.413 - 7.467: 99.8367% ( 1) 00:16:31.697 7.467 - 7.520: 99.8419% ( 1) 00:16:31.697 7.520 - 7.573: 99.8578% ( 3) 00:16:31.697 7.573 - 7.627: 99.8630% ( 1) 00:16:31.697 7.733 - 7.787: 99.8736% ( 2) 00:16:31.697 7.787 - 7.840: 99.8894% ( 3) 00:16:31.697 7.893 - 7.947: 99.8999% ( 2) 00:16:31.697 8.160 - 8.213: 99.9052% ( 1) 00:16:31.697 8.267 - 8.320: 99.9104% ( 1) 00:16:31.697 8.533 - 8.587: 99.9157% ( 1) 00:16:31.697 8.907 - 8.960: 99.9210% ( 1) 00:16:31.697 9.547 - 9.600: 99.9262% ( 1) 00:16:31.697 9.600 - 9.653: 99.9315% ( 1) 00:16:31.697 92.160 - 92.587: 99.9368% ( 1) 00:16:31.697 3986.773 - 4014.080: 100.0000% ( 12) 00:16:31.697 00:16:31.697 Complete histogram 00:16:31.697 ================== 00:16:31.697 Range in us Cumulative Count 00:16:31.697 2.333 - 2.347: 0.0053% ( 1) 00:16:31.697 2.347 - 2.360: 0.0211% ( 3) 00:16:31.697 2.360 - 2.373: 0.8324% ( 154) 00:16:31.697 2.373 - 2.387: 1.2223% ( 74) 00:16:31.697 2.387 - 2.400: 1.4119% ( 36) 00:16:31.697 2.400 - 2.413: 21.2054% ( 3757) 00:16:31.697 2.413 - 2.427: 58.3162% ( 7044) 00:16:31.697 2.427 - 2.440: 74.7221% ( 3114) 00:16:31.697 2.440 - 2.453: 88.4621% ( 2608) 00:16:31.697 2.453 - 2.467: 94.1837% ( 1086) 00:16:31.697 
2.467 - 2.480: 96.0908% ( 362) 00:16:31.697 2.480 - 2.493: 97.5133% ( 270) 00:16:31.697 2.493 - 2.507: 98.4511% ( 178) 00:16:31.697 2.507 - 2.520: 98.9990% ( 104) 00:16:31.697 2.520 - 2.533: 99.2413% ( 46) 00:16:31.697 2.533 - 2.547: 99.3362% ( 18) 00:16:31.697 2.547 - 2.560: 99.3573% ( 4) 00:16:31.697 2.560 - 2.573: 99.3678% ( 2) 00:16:31.697 2.627 - 2.640: 99.3731% ( 1) 00:16:31.697 4.587 - 4.613: 99.3783% ( 1) 00:16:31.697 4.773 - 4.800: 99.3836% ( 1) 00:16:31.697 4.827 - 4.853: 99.3889% ( 1) 00:16:31.697 4.880 - 4.907: 99.3994% ( 2) 00:16:31.697 4.933 - 4.960: 99.4099% ( 2) 00:16:31.697 4.960 - 4.987: 99.4152% ( 1) 00:16:31.697 5.067 - 5.093: 99.4257% ( 2) 00:16:31.697 5.093 - 5.120: 99.4310% ( 1) 00:16:31.697 5.120 - 5.147: 99.4363% ( 1) 00:16:31.697 5.147 - 5.173: 99.4468% ( 2) 00:16:31.697 5.173 - 5.200: 99.4574% ( 2) 00:16:31.697 5.227 - 5.253: 99.4626% ( 1) 00:16:31.697 5.280 - 5.307: 99.4784% ( 3) 00:16:31.697 5.413 - 5.440: 99.4942% ( 3) 00:16:31.697 5.440 - 5.467: 99.5048% ( 2) 00:16:31.697 5.547 - 5.573: 99.5100% ( 1) 00:16:31.697 5.627 - 5.653: 99.5206% ( 2) 00:16:31.697 5.653 - 5.680: 99.5258% ( 1) 00:16:31.697 5.707 - 5.733: 99.5311% ( 1) 00:16:31.697 5.787 - 5.813: 99.5364% ( 1) 00:16:31.697 6.027 - 6.053: 99.5416% ( 1) 00:16:31.697 6.053 - 6.080: 99.5469% ( 1) 00:16:31.697 6.187 - 6.213: 99.5522% ( 1) 00:16:31.697 7.573 - 7.627: 99.5575% ( 1) 00:16:31.697 8.000 - 8.053: 99.5627% ( 1) 00:16:31.697 34.773 - 34.987: 99.5680% ( 1) 00:16:31.697 55.893 - 56.320: 99.5733% ( 1) 00:16:31.697 2990.080 - 3003.733: 99.5785% ( 1) 00:16:31.697 3986.773 - 4014.080: 100.0000% ( 80) 00:16:31.697 00:16:31.697 13:27:28 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:31.697 13:27:28 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:31.697 13:27:28 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:31.697 13:27:28 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:31.697 13:27:28 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.697 [2024-07-26 13:27:28.978327] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:31.697 [ 00:16:31.697 { 00:16:31.697 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:31.697 "subtype": "Discovery", 00:16:31.697 "listen_addresses": [], 00:16:31.697 "allow_any_host": true, 00:16:31.697 "hosts": [] 00:16:31.697 }, 00:16:31.697 { 00:16:31.697 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:31.697 "subtype": "NVMe", 00:16:31.697 "listen_addresses": [ 00:16:31.697 { 00:16:31.697 "transport": "VFIOUSER", 00:16:31.697 "trtype": "VFIOUSER", 00:16:31.697 "adrfam": "IPv4", 00:16:31.697 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:31.697 "trsvcid": "0" 00:16:31.697 } 00:16:31.697 ], 00:16:31.697 "allow_any_host": true, 00:16:31.697 "hosts": [], 00:16:31.697 "serial_number": "SPDK1", 00:16:31.697 "model_number": "SPDK bdev Controller", 00:16:31.697 "max_namespaces": 32, 00:16:31.697 "min_cntlid": 1, 00:16:31.697 "max_cntlid": 65519, 00:16:31.697 "namespaces": [ 00:16:31.697 { 00:16:31.697 "nsid": 1, 00:16:31.697 "bdev_name": "Malloc1", 00:16:31.697 "name": "Malloc1", 00:16:31.697 "nguid": "FF20F6DA1D1340029D58AB7FAE91F289", 00:16:31.697 "uuid": "ff20f6da-1d13-4002-9d58-ab7fae91f289" 00:16:31.697 } 
00:16:31.697 ] 00:16:31.697 }, 00:16:31.697 { 00:16:31.697 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:31.697 "subtype": "NVMe", 00:16:31.697 "listen_addresses": [ 00:16:31.697 { 00:16:31.697 "transport": "VFIOUSER", 00:16:31.697 "trtype": "VFIOUSER", 00:16:31.697 "adrfam": "IPv4", 00:16:31.697 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:31.697 "trsvcid": "0" 00:16:31.697 } 00:16:31.697 ], 00:16:31.697 "allow_any_host": true, 00:16:31.697 "hosts": [], 00:16:31.697 "serial_number": "SPDK2", 00:16:31.697 "model_number": "SPDK bdev Controller", 00:16:31.697 "max_namespaces": 32, 00:16:31.697 "min_cntlid": 1, 00:16:31.697 "max_cntlid": 65519, 00:16:31.697 "namespaces": [ 00:16:31.697 { 00:16:31.697 "nsid": 1, 00:16:31.697 "bdev_name": "Malloc2", 00:16:31.697 "name": "Malloc2", 00:16:31.697 "nguid": "47AD9146B2F84EB49C89FCE40ADDAFA6", 00:16:31.697 "uuid": "47ad9146-b2f8-4eb4-9c89-fce40addafa6" 00:16:31.697 } 00:16:31.697 ] 00:16:31.697 } 00:16:31.697 ] 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@34 -- # aerpid=916271 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:31.697 13:27:29 -- common/autotest_common.sh@1244 -- # local i=0 00:16:31.697 13:27:29 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:31.697 13:27:29 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:31.697 13:27:29 -- common/autotest_common.sh@1255 -- # return 0 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:31.697 13:27:29 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:31.697 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.958 Malloc3 00:16:31.958 13:27:29 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:31.958 13:27:29 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:31.959 Asynchronous Event Request test 00:16:31.959 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.959 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:31.959 Registering asynchronous event callbacks... 00:16:31.959 Starting namespace attribute notice tests for all controllers... 00:16:31.959 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:31.959 aer_cb - Changed Namespace 00:16:31.959 Cleaning up... 
00:16:32.221 [ 00:16:32.221 { 00:16:32.221 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:32.221 "subtype": "Discovery", 00:16:32.221 "listen_addresses": [], 00:16:32.221 "allow_any_host": true, 00:16:32.221 "hosts": [] 00:16:32.221 }, 00:16:32.221 { 00:16:32.221 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:32.221 "subtype": "NVMe", 00:16:32.221 "listen_addresses": [ 00:16:32.221 { 00:16:32.221 "transport": "VFIOUSER", 00:16:32.221 "trtype": "VFIOUSER", 00:16:32.221 "adrfam": "IPv4", 00:16:32.221 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:32.221 "trsvcid": "0" 00:16:32.221 } 00:16:32.221 ], 00:16:32.221 "allow_any_host": true, 00:16:32.221 "hosts": [], 00:16:32.221 "serial_number": "SPDK1", 00:16:32.221 "model_number": "SPDK bdev Controller", 00:16:32.221 "max_namespaces": 32, 00:16:32.221 "min_cntlid": 1, 00:16:32.221 "max_cntlid": 65519, 00:16:32.221 "namespaces": [ 00:16:32.221 { 00:16:32.221 "nsid": 1, 00:16:32.221 "bdev_name": "Malloc1", 00:16:32.221 "name": "Malloc1", 00:16:32.221 "nguid": "FF20F6DA1D1340029D58AB7FAE91F289", 00:16:32.221 "uuid": "ff20f6da-1d13-4002-9d58-ab7fae91f289" 00:16:32.221 }, 00:16:32.221 { 00:16:32.221 "nsid": 2, 00:16:32.221 "bdev_name": "Malloc3", 00:16:32.221 "name": "Malloc3", 00:16:32.221 "nguid": "60D09ECF5B2E4737B4C81CFF6F59AA06", 00:16:32.221 "uuid": "60d09ecf-5b2e-4737-b4c8-1cff6f59aa06" 00:16:32.221 } 00:16:32.221 ] 00:16:32.221 }, 00:16:32.221 { 00:16:32.221 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:32.221 "subtype": "NVMe", 00:16:32.221 "listen_addresses": [ 00:16:32.221 { 00:16:32.221 "transport": "VFIOUSER", 00:16:32.221 "trtype": "VFIOUSER", 00:16:32.221 "adrfam": "IPv4", 00:16:32.221 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:32.221 "trsvcid": "0" 00:16:32.221 } 00:16:32.221 ], 00:16:32.221 "allow_any_host": true, 00:16:32.221 "hosts": [], 00:16:32.221 "serial_number": "SPDK2", 00:16:32.221 "model_number": "SPDK bdev Controller", 00:16:32.221 "max_namespaces": 32, 00:16:32.221 "min_cntlid": 1, 00:16:32.221 "max_cntlid": 65519, 00:16:32.221 "namespaces": [ 00:16:32.221 { 00:16:32.221 "nsid": 1, 00:16:32.221 "bdev_name": "Malloc2", 00:16:32.221 "name": "Malloc2", 00:16:32.221 "nguid": "47AD9146B2F84EB49C89FCE40ADDAFA6", 00:16:32.221 "uuid": "47ad9146-b2f8-4eb4-9c89-fce40addafa6" 00:16:32.221 } 00:16:32.221 ] 00:16:32.221 } 00:16:32.221 ] 00:16:32.221 13:27:29 -- target/nvmf_vfio_user.sh@44 -- # wait 916271 00:16:32.221 13:27:29 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:32.221 13:27:29 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:32.221 13:27:29 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:32.221 13:27:29 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:32.221 [2024-07-26 13:27:29.539190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:32.221 [2024-07-26 13:27:29.539238] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916285 ] 00:16:32.221 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.221 [2024-07-26 13:27:29.570749] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:32.221 [2024-07-26 13:27:29.579383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:32.221 [2024-07-26 13:27:29.579404] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f42c618a000 00:16:32.221 [2024-07-26 13:27:29.580378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.581389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.582394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.583398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.584406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.585411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.586415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.587425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:32.221 [2024-07-26 13:27:29.588433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:32.221 [2024-07-26 13:27:29.588442] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f42c4f50000 00:16:32.221 [2024-07-26 13:27:29.589767] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:32.221 [2024-07-26 13:27:29.605964] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:32.221 [2024-07-26 13:27:29.605990] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:32.221 [2024-07-26 13:27:29.611054] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:32.221 [2024-07-26 13:27:29.611096] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:32.221 [2024-07-26 13:27:29.611179] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:16:32.221 [2024-07-26 13:27:29.611194] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:32.221 [2024-07-26 13:27:29.611203] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:32.221 [2024-07-26 13:27:29.612061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:32.221 [2024-07-26 13:27:29.612070] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:32.221 [2024-07-26 13:27:29.612077] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:32.221 [2024-07-26 13:27:29.613069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:32.221 [2024-07-26 13:27:29.613077] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:32.221 [2024-07-26 13:27:29.613084] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:32.221 [2024-07-26 13:27:29.614079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:32.222 [2024-07-26 13:27:29.614088] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:32.222 [2024-07-26 13:27:29.615080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:32.222 [2024-07-26 13:27:29.615088] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:32.222 [2024-07-26 13:27:29.615093] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:32.222 [2024-07-26 13:27:29.615100] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:32.222 [2024-07-26 13:27:29.615205] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:32.222 [2024-07-26 13:27:29.615210] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:32.222 [2024-07-26 13:27:29.615215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:32.222 [2024-07-26 13:27:29.616090] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:32.222 [2024-07-26 13:27:29.617095] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:32.222 [2024-07-26 13:27:29.618110] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:32.222 [2024-07-26 13:27:29.619136] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:32.222 [2024-07-26 13:27:29.620130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:32.222 [2024-07-26 13:27:29.620138] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:32.222 [2024-07-26 13:27:29.620142] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.620163] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:32.222 [2024-07-26 13:27:29.620171] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.620181] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:32.222 [2024-07-26 13:27:29.620186] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:32.222 [2024-07-26 13:27:29.620198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.628209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.628220] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:32.222 [2024-07-26 13:27:29.628225] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:32.222 [2024-07-26 13:27:29.628229] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:32.222 [2024-07-26 13:27:29.628234] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:32.222 [2024-07-26 13:27:29.628239] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:32.222 [2024-07-26 13:27:29.628244] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:32.222 [2024-07-26 13:27:29.628248] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.628258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.628268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.636205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 
13:27:29.636220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.222 [2024-07-26 13:27:29.636228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.222 [2024-07-26 13:27:29.636236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.222 [2024-07-26 13:27:29.636244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.222 [2024-07-26 13:27:29.636249] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.636257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.636268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.644205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.644213] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:32.222 [2024-07-26 13:27:29.644218] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.644224] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.644231] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.644240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.652206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.652267] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.652275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.652282] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:32.222 [2024-07-26 13:27:29.652286] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:32.222 [2024-07-26 13:27:29.652292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.660206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:16:32.222 [2024-07-26 13:27:29.660219] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:32.222 [2024-07-26 13:27:29.660229] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.660237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.660243] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:32.222 [2024-07-26 13:27:29.660248] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:32.222 [2024-07-26 13:27:29.660254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.668205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.668218] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.668226] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.668233] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:32.222 [2024-07-26 13:27:29.668238] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:32.222 [2024-07-26 13:27:29.668246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.672629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.672641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.672647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.672656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.672661] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.672710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:32.222 [2024-07-26 13:27:29.672715] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:32.222 [2024-07-26 13:27:29.672720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:32.222 [2024-07-26 
13:27:29.672725] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:32.222 [2024-07-26 13:27:29.672741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:32.222 [2024-07-26 13:27:29.683208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:32.222 [2024-07-26 13:27:29.683221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:32.223 [2024-07-26 13:27:29.691209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:32.223 [2024-07-26 13:27:29.691224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:32.485 [2024-07-26 13:27:29.699207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:32.485 [2024-07-26 13:27:29.699222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:32.485 [2024-07-26 13:27:29.707206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:32.485 [2024-07-26 13:27:29.707219] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:32.485 [2024-07-26 13:27:29.707224] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:32.485 [2024-07-26 13:27:29.707227] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:32.485 [2024-07-26 13:27:29.707231] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:32.485 [2024-07-26 13:27:29.707237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:32.485 [2024-07-26 13:27:29.707245] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:32.485 [2024-07-26 13:27:29.707249] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:32.485 [2024-07-26 13:27:29.707255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:32.485 [2024-07-26 13:27:29.707265] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:32.485 [2024-07-26 13:27:29.707270] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:32.485 [2024-07-26 13:27:29.707276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:32.485 [2024-07-26 13:27:29.707283] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:32.485 [2024-07-26 13:27:29.707287] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:32.485 [2024-07-26 13:27:29.707293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:32.486 [2024-07-26 13:27:29.715206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:32.486 [2024-07-26 13:27:29.715224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:32.486 [2024-07-26 13:27:29.715233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:32.486 [2024-07-26 13:27:29.715241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:32.486 ===================================================== 00:16:32.486 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:32.486 ===================================================== 00:16:32.486 Controller Capabilities/Features 00:16:32.486 ================================ 00:16:32.486 Vendor ID: 4e58 00:16:32.486 Subsystem Vendor ID: 4e58 00:16:32.486 Serial Number: SPDK2 00:16:32.486 Model Number: SPDK bdev Controller 00:16:32.486 Firmware Version: 24.01.1 00:16:32.486 Recommended Arb Burst: 6 00:16:32.486 IEEE OUI Identifier: 8d 6b 50 00:16:32.486 Multi-path I/O 00:16:32.486 May have multiple subsystem ports: Yes 00:16:32.486 May have multiple controllers: Yes 00:16:32.486 Associated with SR-IOV VF: No 00:16:32.486 Max Data Transfer Size: 131072 00:16:32.486 Max Number of Namespaces: 32 00:16:32.486 Max Number of I/O Queues: 127 00:16:32.486 NVMe Specification Version (VS): 1.3 00:16:32.486 NVMe Specification Version (Identify): 1.3 00:16:32.486 Maximum Queue Entries: 256 00:16:32.486 Contiguous Queues Required: Yes 00:16:32.486 Arbitration Mechanisms Supported 00:16:32.486 Weighted Round Robin: Not Supported 00:16:32.486 Vendor Specific: Not Supported 00:16:32.486 Reset Timeout: 15000 ms 00:16:32.486 Doorbell Stride: 4 bytes 00:16:32.486 NVM Subsystem Reset: Not Supported 00:16:32.486 Command Sets Supported 00:16:32.486 NVM Command Set: Supported 00:16:32.486 Boot Partition: Not Supported 00:16:32.486 Memory Page Size Minimum: 4096 bytes 00:16:32.486 Memory Page Size Maximum: 4096 bytes 00:16:32.486 Persistent Memory Region: Not Supported 00:16:32.486 Optional Asynchronous Events Supported 00:16:32.486 Namespace Attribute Notices: Supported 00:16:32.486 Firmware Activation Notices: Not Supported 00:16:32.486 ANA Change Notices: Not Supported 00:16:32.486 PLE Aggregate Log Change Notices: Not Supported 00:16:32.486 LBA Status Info Alert Notices: Not Supported 00:16:32.486 EGE Aggregate Log Change Notices: Not Supported 00:16:32.486 Normal NVM Subsystem Shutdown event: Not Supported 00:16:32.486 Zone Descriptor Change Notices: Not Supported 00:16:32.486 Discovery Log Change Notices: Not Supported 00:16:32.486 Controller Attributes 00:16:32.486 128-bit Host Identifier: Supported 00:16:32.486 Non-Operational Permissive Mode: Not Supported 00:16:32.486 NVM Sets: Not Supported 00:16:32.486 Read Recovery Levels: Not Supported 00:16:32.486 Endurance Groups: Not Supported 00:16:32.486 Predictable Latency Mode: Not Supported 00:16:32.486 Traffic Based Keep ALive: Not Supported 00:16:32.486 Namespace Granularity: Not Supported 00:16:32.486 SQ Associations: Not Supported 00:16:32.486 UUID List: Not Supported 00:16:32.486 Multi-Domain Subsystem: Not Supported 00:16:32.486 Fixed Capacity Management: Not Supported 
00:16:32.486 Variable Capacity Management: Not Supported 00:16:32.486 Delete Endurance Group: Not Supported 00:16:32.486 Delete NVM Set: Not Supported 00:16:32.486 Extended LBA Formats Supported: Not Supported 00:16:32.486 Flexible Data Placement Supported: Not Supported 00:16:32.486 00:16:32.486 Controller Memory Buffer Support 00:16:32.486 ================================ 00:16:32.486 Supported: No 00:16:32.486 00:16:32.486 Persistent Memory Region Support 00:16:32.486 ================================ 00:16:32.486 Supported: No 00:16:32.486 00:16:32.486 Admin Command Set Attributes 00:16:32.486 ============================ 00:16:32.486 Security Send/Receive: Not Supported 00:16:32.486 Format NVM: Not Supported 00:16:32.486 Firmware Activate/Download: Not Supported 00:16:32.486 Namespace Management: Not Supported 00:16:32.486 Device Self-Test: Not Supported 00:16:32.486 Directives: Not Supported 00:16:32.486 NVMe-MI: Not Supported 00:16:32.486 Virtualization Management: Not Supported 00:16:32.486 Doorbell Buffer Config: Not Supported 00:16:32.486 Get LBA Status Capability: Not Supported 00:16:32.486 Command & Feature Lockdown Capability: Not Supported 00:16:32.486 Abort Command Limit: 4 00:16:32.486 Async Event Request Limit: 4 00:16:32.486 Number of Firmware Slots: N/A 00:16:32.486 Firmware Slot 1 Read-Only: N/A 00:16:32.486 Firmware Activation Without Reset: N/A 00:16:32.486 Multiple Update Detection Support: N/A 00:16:32.486 Firmware Update Granularity: No Information Provided 00:16:32.486 Per-Namespace SMART Log: No 00:16:32.486 Asymmetric Namespace Access Log Page: Not Supported 00:16:32.486 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:32.486 Command Effects Log Page: Supported 00:16:32.486 Get Log Page Extended Data: Supported 00:16:32.486 Telemetry Log Pages: Not Supported 00:16:32.486 Persistent Event Log Pages: Not Supported 00:16:32.486 Supported Log Pages Log Page: May Support 00:16:32.486 Commands Supported & Effects Log Page: Not Supported 00:16:32.486 Feature Identifiers & Effects Log Page:May Support 00:16:32.486 NVMe-MI Commands & Effects Log Page: May Support 00:16:32.486 Data Area 4 for Telemetry Log: Not Supported 00:16:32.486 Error Log Page Entries Supported: 128 00:16:32.486 Keep Alive: Supported 00:16:32.486 Keep Alive Granularity: 10000 ms 00:16:32.486 00:16:32.486 NVM Command Set Attributes 00:16:32.486 ========================== 00:16:32.486 Submission Queue Entry Size 00:16:32.486 Max: 64 00:16:32.486 Min: 64 00:16:32.486 Completion Queue Entry Size 00:16:32.486 Max: 16 00:16:32.486 Min: 16 00:16:32.486 Number of Namespaces: 32 00:16:32.486 Compare Command: Supported 00:16:32.486 Write Uncorrectable Command: Not Supported 00:16:32.486 Dataset Management Command: Supported 00:16:32.486 Write Zeroes Command: Supported 00:16:32.486 Set Features Save Field: Not Supported 00:16:32.486 Reservations: Not Supported 00:16:32.486 Timestamp: Not Supported 00:16:32.486 Copy: Supported 00:16:32.486 Volatile Write Cache: Present 00:16:32.486 Atomic Write Unit (Normal): 1 00:16:32.486 Atomic Write Unit (PFail): 1 00:16:32.486 Atomic Compare & Write Unit: 1 00:16:32.486 Fused Compare & Write: Supported 00:16:32.486 Scatter-Gather List 00:16:32.486 SGL Command Set: Supported (Dword aligned) 00:16:32.486 SGL Keyed: Not Supported 00:16:32.486 SGL Bit Bucket Descriptor: Not Supported 00:16:32.486 SGL Metadata Pointer: Not Supported 00:16:32.486 Oversized SGL: Not Supported 00:16:32.486 SGL Metadata Address: Not Supported 00:16:32.486 SGL Offset: Not Supported 00:16:32.486 
Transport SGL Data Block: Not Supported 00:16:32.486 Replay Protected Memory Block: Not Supported 00:16:32.486 00:16:32.486 Firmware Slot Information 00:16:32.486 ========================= 00:16:32.486 Active slot: 1 00:16:32.486 Slot 1 Firmware Revision: 24.01.1 00:16:32.486 00:16:32.486 00:16:32.486 Commands Supported and Effects 00:16:32.486 ============================== 00:16:32.486 Admin Commands 00:16:32.486 -------------- 00:16:32.486 Get Log Page (02h): Supported 00:16:32.486 Identify (06h): Supported 00:16:32.486 Abort (08h): Supported 00:16:32.486 Set Features (09h): Supported 00:16:32.486 Get Features (0Ah): Supported 00:16:32.486 Asynchronous Event Request (0Ch): Supported 00:16:32.486 Keep Alive (18h): Supported 00:16:32.486 I/O Commands 00:16:32.486 ------------ 00:16:32.486 Flush (00h): Supported LBA-Change 00:16:32.486 Write (01h): Supported LBA-Change 00:16:32.486 Read (02h): Supported 00:16:32.486 Compare (05h): Supported 00:16:32.486 Write Zeroes (08h): Supported LBA-Change 00:16:32.486 Dataset Management (09h): Supported LBA-Change 00:16:32.486 Copy (19h): Supported LBA-Change 00:16:32.486 Unknown (79h): Supported LBA-Change 00:16:32.486 Unknown (7Ah): Supported 00:16:32.486 00:16:32.486 Error Log 00:16:32.486 ========= 00:16:32.486 00:16:32.486 Arbitration 00:16:32.486 =========== 00:16:32.486 Arbitration Burst: 1 00:16:32.486 00:16:32.486 Power Management 00:16:32.486 ================ 00:16:32.486 Number of Power States: 1 00:16:32.486 Current Power State: Power State #0 00:16:32.486 Power State #0: 00:16:32.486 Max Power: 0.00 W 00:16:32.486 Non-Operational State: Operational 00:16:32.486 Entry Latency: Not Reported 00:16:32.487 Exit Latency: Not Reported 00:16:32.487 Relative Read Throughput: 0 00:16:32.487 Relative Read Latency: 0 00:16:32.487 Relative Write Throughput: 0 00:16:32.487 Relative Write Latency: 0 00:16:32.487 Idle Power: Not Reported 00:16:32.487 Active Power: Not Reported 00:16:32.487 Non-Operational Permissive Mode: Not Supported 00:16:32.487 00:16:32.487 Health Information 00:16:32.487 ================== 00:16:32.487 Critical Warnings: 00:16:32.487 Available Spare Space: OK 00:16:32.487 Temperature: OK 00:16:32.487 Device Reliability: OK 00:16:32.487 Read Only: No 00:16:32.487 Volatile Memory Backup: OK 00:16:32.487 Current Temperature: 0 Kelvin[2024-07-26 13:27:29.715342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:32.487 [2024-07-26 13:27:29.723208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:32.487 [2024-07-26 13:27:29.723238] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:32.487 [2024-07-26 13:27:29.723247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.487 [2024-07-26 13:27:29.723254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.487 [2024-07-26 13:27:29.723260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.487 [2024-07-26 13:27:29.723266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.487 [2024-07-26 13:27:29.723303] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:32.487 [2024-07-26 13:27:29.723313] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:32.487 [2024-07-26 13:27:29.724343] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:32.487 [2024-07-26 13:27:29.724351] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:32.487 [2024-07-26 13:27:29.725319] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:32.487 [2024-07-26 13:27:29.725330] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:32.487 [2024-07-26 13:27:29.725380] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:32.487 [2024-07-26 13:27:29.726752] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:32.487 (-273 Celsius) 00:16:32.487 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:32.487 Available Spare: 0% 00:16:32.487 Available Spare Threshold: 0% 00:16:32.487 Life Percentage Used: 0% 00:16:32.487 Data Units Read: 0 00:16:32.487 Data Units Written: 0 00:16:32.487 Host Read Commands: 0 00:16:32.487 Host Write Commands: 0 00:16:32.487 Controller Busy Time: 0 minutes 00:16:32.487 Power Cycles: 0 00:16:32.487 Power On Hours: 0 hours 00:16:32.487 Unsafe Shutdowns: 0 00:16:32.487 Unrecoverable Media Errors: 0 00:16:32.487 Lifetime Error Log Entries: 0 00:16:32.487 Warning Temperature Time: 0 minutes 00:16:32.487 Critical Temperature Time: 0 minutes 00:16:32.487 00:16:32.487 Number of Queues 00:16:32.487 ================ 00:16:32.487 Number of I/O Submission Queues: 127 00:16:32.487 Number of I/O Completion Queues: 127 00:16:32.487 00:16:32.487 Active Namespaces 00:16:32.487 ================= 00:16:32.487 Namespace ID:1 00:16:32.487 Error Recovery Timeout: Unlimited 00:16:32.487 Command Set Identifier: NVM (00h) 00:16:32.487 Deallocate: Supported 00:16:32.487 Deallocated/Unwritten Error: Not Supported 00:16:32.487 Deallocated Read Value: Unknown 00:16:32.487 Deallocate in Write Zeroes: Not Supported 00:16:32.487 Deallocated Guard Field: 0xFFFF 00:16:32.487 Flush: Supported 00:16:32.487 Reservation: Supported 00:16:32.487 Namespace Sharing Capabilities: Multiple Controllers 00:16:32.487 Size (in LBAs): 131072 (0GiB) 00:16:32.487 Capacity (in LBAs): 131072 (0GiB) 00:16:32.487 Utilization (in LBAs): 131072 (0GiB) 00:16:32.487 NGUID: 47AD9146B2F84EB49C89FCE40ADDAFA6 00:16:32.487 UUID: 47ad9146-b2f8-4eb4-9c89-fce40addafa6 00:16:32.487 Thin Provisioning: Not Supported 00:16:32.487 Per-NS Atomic Units: Yes 00:16:32.487 Atomic Boundary Size (Normal): 0 00:16:32.487 Atomic Boundary Size (PFail): 0 00:16:32.487 Atomic Boundary Offset: 0 00:16:32.487 Maximum Single Source Range Length: 65535 00:16:32.487 Maximum Copy Length: 65535 00:16:32.487 Maximum Source Range Count: 1 00:16:32.487 NGUID/EUI64 Never Reused: No 00:16:32.487 Namespace Write Protected: No 00:16:32.487 Number of LBA Formats: 1 00:16:32.487 Current LBA Format: LBA Format #00 00:16:32.487 LBA Format #00: Data Size: 512 Metadata Size: 0 
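[editor annotation] The identify dump above shows the vfio-user controller presenting itself as an NVMe-oF controller (model "SPDK bdev Controller", fused compare-and-write supported, keep-alive granularity 10000 ms) with namespace 1 backed by Malloc2. The next step in the trace (@84/@85, shown below) drives I/O through the same endpoint with spdk_nvme_perf, first reads then writes. A hedged sketch of those two invocations, with all flags copied from the commands in the trace that follows and only the variable names introduced here:

    # Read and write passes against cnode2: queue depth 128, 4 KiB I/O, 5 s, core mask 0x2.
    PERF=./build/bin/spdk_nvme_perf
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2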
00:16:32.487 00:16:32.487 13:27:29 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:32.487 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.778 Initializing NVMe Controllers 00:16:37.778 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:37.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:37.778 Initialization complete. Launching workers. 00:16:37.778 ======================================================== 00:16:37.778 Latency(us) 00:16:37.778 Device Information : IOPS MiB/s Average min max 00:16:37.778 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39963.40 156.11 3205.32 836.32 8814.26 00:16:37.778 ======================================================== 00:16:37.778 Total : 39963.40 156.11 3205.32 836.32 8814.26 00:16:37.778 00:16:37.778 13:27:35 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:37.778 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.099 Initializing NVMe Controllers 00:16:43.099 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:43.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:43.099 Initialization complete. Launching workers. 00:16:43.099 ======================================================== 00:16:43.099 Latency(us) 00:16:43.099 Device Information : IOPS MiB/s Average min max 00:16:43.099 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37990.23 148.40 3368.94 1078.48 7434.88 00:16:43.099 ======================================================== 00:16:43.099 Total : 37990.23 148.40 3368.94 1078.48 7434.88 00:16:43.099 00:16:43.099 13:27:40 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:43.099 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.386 Initializing NVMe Controllers 00:16:48.386 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:48.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:48.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:48.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:48.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:48.386 Initialization complete. Launching workers. 
00:16:48.386 Starting thread on core 2 00:16:48.386 Starting thread on core 3 00:16:48.386 Starting thread on core 1 00:16:48.386 13:27:45 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:48.386 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.688 Initializing NVMe Controllers 00:16:51.688 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.688 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:51.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:51.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:51.688 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:51.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:51.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:51.688 Initialization complete. Launching workers. 00:16:51.688 Starting thread on core 1 with urgent priority queue 00:16:51.688 Starting thread on core 2 with urgent priority queue 00:16:51.688 Starting thread on core 3 with urgent priority queue 00:16:51.688 Starting thread on core 0 with urgent priority queue 00:16:51.688 SPDK bdev Controller (SPDK2 ) core 0: 11588.67 IO/s 8.63 secs/100000 ios 00:16:51.688 SPDK bdev Controller (SPDK2 ) core 1: 4056.33 IO/s 24.65 secs/100000 ios 00:16:51.688 SPDK bdev Controller (SPDK2 ) core 2: 5596.67 IO/s 17.87 secs/100000 ios 00:16:51.688 SPDK bdev Controller (SPDK2 ) core 3: 7953.33 IO/s 12.57 secs/100000 ios 00:16:51.688 ======================================================== 00:16:51.688 00:16:51.688 13:27:48 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:51.688 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.688 Initializing NVMe Controllers 00:16:51.688 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.688 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.688 Namespace ID: 1 size: 0GB 00:16:51.688 Initialization complete. 00:16:51.688 INFO: using host memory buffer for IO 00:16:51.688 Hello world! 00:16:51.688 13:27:49 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:51.949 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.336 Initializing NVMe Controllers 00:16:53.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.336 Initialization complete. Launching workers. 
00:16:53.336 submit (in ns) avg, min, max = 6297.9, 3824.2, 4000319.2 00:16:53.336 complete (in ns) avg, min, max = 19177.4, 2340.0, 3999224.2 00:16:53.336 00:16:53.336 Submit histogram 00:16:53.336 ================ 00:16:53.336 Range in us Cumulative Count 00:16:53.336 3.813 - 3.840: 0.1781% ( 34) 00:16:53.336 3.840 - 3.867: 2.8866% ( 517) 00:16:53.336 3.867 - 3.893: 10.5354% ( 1460) 00:16:53.336 3.893 - 3.920: 20.2221% ( 1849) 00:16:53.336 3.920 - 3.947: 30.4170% ( 1946) 00:16:53.336 3.947 - 3.973: 41.0101% ( 2022) 00:16:53.336 3.973 - 4.000: 53.5310% ( 2390) 00:16:53.336 4.000 - 4.027: 69.4153% ( 3032) 00:16:53.336 4.027 - 4.053: 84.8386% ( 2944) 00:16:53.336 4.053 - 4.080: 94.2739% ( 1801) 00:16:53.336 4.080 - 4.107: 98.0145% ( 714) 00:16:53.336 4.107 - 4.133: 99.0675% ( 201) 00:16:53.336 4.133 - 4.160: 99.2875% ( 42) 00:16:53.336 4.160 - 4.187: 99.3504% ( 12) 00:16:53.336 4.187 - 4.213: 99.3870% ( 7) 00:16:53.336 4.213 - 4.240: 99.4132% ( 5) 00:16:53.336 4.240 - 4.267: 99.4342% ( 4) 00:16:53.336 4.267 - 4.293: 99.4604% ( 5) 00:16:53.336 4.293 - 4.320: 99.4761% ( 3) 00:16:53.336 4.347 - 4.373: 99.4866% ( 2) 00:16:53.336 4.373 - 4.400: 99.4918% ( 1) 00:16:53.336 4.400 - 4.427: 99.5023% ( 2) 00:16:53.336 4.453 - 4.480: 99.5075% ( 1) 00:16:53.336 4.480 - 4.507: 99.5128% ( 1) 00:16:53.336 4.507 - 4.533: 99.5180% ( 1) 00:16:53.336 4.533 - 4.560: 99.5233% ( 1) 00:16:53.336 4.640 - 4.667: 99.5285% ( 1) 00:16:53.336 5.173 - 5.200: 99.5337% ( 1) 00:16:53.336 5.227 - 5.253: 99.5390% ( 1) 00:16:53.336 5.280 - 5.307: 99.5442% ( 1) 00:16:53.336 5.413 - 5.440: 99.5495% ( 1) 00:16:53.336 5.467 - 5.493: 99.5547% ( 1) 00:16:53.336 5.760 - 5.787: 99.5599% ( 1) 00:16:53.336 5.813 - 5.840: 99.5652% ( 1) 00:16:53.336 5.840 - 5.867: 99.5704% ( 1) 00:16:53.336 5.867 - 5.893: 99.5756% ( 1) 00:16:53.336 5.920 - 5.947: 99.5809% ( 1) 00:16:53.336 5.947 - 5.973: 99.6018% ( 4) 00:16:53.336 5.973 - 6.000: 99.6280% ( 5) 00:16:53.336 6.000 - 6.027: 99.6490% ( 4) 00:16:53.336 6.027 - 6.053: 99.6595% ( 2) 00:16:53.336 6.053 - 6.080: 99.6699% ( 2) 00:16:53.336 6.080 - 6.107: 99.6752% ( 1) 00:16:53.336 6.240 - 6.267: 99.6804% ( 1) 00:16:53.336 6.267 - 6.293: 99.6857% ( 1) 00:16:53.336 6.293 - 6.320: 99.6909% ( 1) 00:16:53.336 6.320 - 6.347: 99.7066% ( 3) 00:16:53.336 6.347 - 6.373: 99.7223% ( 3) 00:16:53.336 6.373 - 6.400: 99.7276% ( 1) 00:16:53.336 6.400 - 6.427: 99.7485% ( 4) 00:16:53.336 6.480 - 6.507: 99.7538% ( 1) 00:16:53.336 6.587 - 6.613: 99.7590% ( 1) 00:16:53.336 6.613 - 6.640: 99.7642% ( 1) 00:16:53.336 6.640 - 6.667: 99.7747% ( 2) 00:16:53.336 6.667 - 6.693: 99.7800% ( 1) 00:16:53.336 6.693 - 6.720: 99.7904% ( 2) 00:16:53.336 6.747 - 6.773: 99.8009% ( 2) 00:16:53.336 6.773 - 6.800: 99.8114% ( 2) 00:16:53.337 6.827 - 6.880: 99.8166% ( 1) 00:16:53.337 6.880 - 6.933: 99.8376% ( 4) 00:16:53.337 6.933 - 6.987: 99.8428% ( 1) 00:16:53.337 6.987 - 7.040: 99.8481% ( 1) 00:16:53.337 7.093 - 7.147: 99.8585% ( 2) 00:16:53.337 7.147 - 7.200: 99.8743% ( 3) 00:16:53.337 7.200 - 7.253: 99.8795% ( 1) 00:16:53.337 7.307 - 7.360: 99.8952% ( 3) 00:16:53.337 7.360 - 7.413: 99.9005% ( 1) 00:16:53.337 7.413 - 7.467: 99.9057% ( 1) 00:16:53.337 7.573 - 7.627: 99.9109% ( 1) 00:16:53.337 8.320 - 8.373: 99.9162% ( 1) 00:16:53.337 8.587 - 8.640: 99.9214% ( 1) 00:16:53.337 8.907 - 8.960: 99.9267% ( 1) 00:16:53.337 11.893 - 11.947: 99.9319% ( 1) 00:16:53.337 12.320 - 12.373: 99.9371% ( 1) 00:16:53.337 15.253 - 15.360: 99.9424% ( 1) 00:16:53.337 3986.773 - 4014.080: 100.0000% ( 11) 00:16:53.337 00:16:53.337 Complete histogram 00:16:53.337 
================== 00:16:53.337 Range in us Cumulative Count 00:16:53.337 2.333 - 2.347: 0.0052% ( 1) 00:16:53.337 2.347 - 2.360: 0.0733% ( 13) 00:16:53.337 2.360 - 2.373: 0.8644% ( 151) 00:16:53.337 2.373 - 2.387: 1.2573% ( 75) 00:16:53.337 2.387 - 2.400: 1.4407% ( 35) 00:16:53.337 2.400 - 2.413: 10.7659% ( 1780) 00:16:53.337 2.413 - 2.427: 48.5593% ( 7214) 00:16:53.337 2.427 - 2.440: 68.7814% ( 3860) 00:16:53.337 2.440 - 2.453: 84.7705% ( 3052) 00:16:53.337 2.453 - 2.467: 92.8699% ( 1546) 00:16:53.337 2.467 - 2.480: 95.7146% ( 543) 00:16:53.337 2.480 - 2.493: 97.0819% ( 261) 00:16:53.337 2.493 - 2.507: 98.0773% ( 190) 00:16:53.337 2.507 - 2.520: 98.7427% ( 127) 00:16:53.337 2.520 - 2.533: 99.1041% ( 69) 00:16:53.337 2.533 - 2.547: 99.3032% ( 38) 00:16:53.337 2.547 - 2.560: 99.3189% ( 3) 00:16:53.337 2.560 - 2.573: 99.3294% ( 2) 00:16:53.337 2.573 - 2.587: 99.3399% ( 2) 00:16:53.337 2.587 - 2.600: 99.3504% ( 2) 00:16:53.337 2.600 - 2.613: 99.3556% ( 1) 00:16:53.337 2.613 - 2.627: 99.3661% ( 2) 00:16:53.337 2.733 - 2.747: 99.3713% ( 1) 00:16:53.337 2.867 - 2.880: 99.3766% ( 1) 00:16:53.337 4.107 - 4.133: 99.3818% ( 1) 00:16:53.337 4.133 - 4.160: 99.3870% ( 1) 00:16:53.337 4.267 - 4.293: 99.3923% ( 1) 00:16:53.337 4.293 - 4.320: 99.3975% ( 1) 00:16:53.337 4.373 - 4.400: 99.4028% ( 1) 00:16:53.337 4.427 - 4.453: 99.4080% ( 1) 00:16:53.337 4.480 - 4.507: 99.4132% ( 1) 00:16:53.337 4.533 - 4.560: 99.4185% ( 1) 00:16:53.337 4.560 - 4.587: 99.4237% ( 1) 00:16:53.337 4.613 - 4.640: 99.4290% ( 1) 00:16:53.337 4.640 - 4.667: 99.4342% ( 1) 00:16:53.337 4.720 - 4.747: 99.4394% ( 1) 00:16:53.337 4.800 - 4.827: 99.4447% ( 1) 00:16:53.337 4.853 - 4.880: 99.4499% ( 1) 00:16:53.337 4.880 - 4.907: 99.4552% ( 1) 00:16:53.337 4.907 - 4.933: 99.4604% ( 1) 00:16:53.337 4.933 - 4.960: 99.4709% ( 2) 00:16:53.337 4.960 - 4.987: 99.4761% ( 1) 00:16:53.337 4.987 - 5.013: 99.4813% ( 1) 00:16:53.337 5.013 - 5.040: 99.4866% ( 1) 00:16:53.337 5.040 - 5.067: 99.4918% ( 1) 00:16:53.337 5.093 - 5.120: 99.4971% ( 1) 00:16:53.337 5.120 - 5.147: 99.5023% ( 1) 00:16:53.337 5.147 - 5.173: 99.5128% ( 2) 00:16:53.337 5.200 - 5.227: 99.5180% ( 1) 00:16:53.337 5.253 - 5.280: 99.5233% ( 1) 00:16:53.337 5.440 - 5.467: 99.5285% ( 1) 00:16:53.337 5.627 - 5.653: 99.5337% ( 1) 00:16:53.337 5.867 - 5.893: 99.5390% ( 1) 00:16:53.337 5.893 - 5.920: 99.5442% ( 1) 00:16:53.337 6.000 - 6.027: 99.5495% ( 1) 00:16:53.337 6.587 - 6.613: 99.5547% ( 1) 00:16:53.337 10.507 - 10.560: 99.5599% ( 1) 00:16:53.337 11.360 - 11.413: 99.5652% ( 1) 00:16:53.337 16.640 - 16.747: 99.5704% ( 1) 00:16:53.337 48.000 - 48.213: 99.5756% ( 1) 00:16:53.337 1003.520 - 1010.347: 99.5809% ( 1) 00:16:53.337 2908.160 - 2921.813: 99.5861% ( 1) 00:16:53.337 3986.773 - 4014.080: 100.0000% ( 79) 00:16:53.337 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:53.337 [ 00:16:53.337 { 00:16:53.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:53.337 "subtype": "Discovery", 00:16:53.337 "listen_addresses": [], 00:16:53.337 "allow_any_host": true, 
00:16:53.337 "hosts": [] 00:16:53.337 }, 00:16:53.337 { 00:16:53.337 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:53.337 "subtype": "NVMe", 00:16:53.337 "listen_addresses": [ 00:16:53.337 { 00:16:53.337 "transport": "VFIOUSER", 00:16:53.337 "trtype": "VFIOUSER", 00:16:53.337 "adrfam": "IPv4", 00:16:53.337 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:53.337 "trsvcid": "0" 00:16:53.337 } 00:16:53.337 ], 00:16:53.337 "allow_any_host": true, 00:16:53.337 "hosts": [], 00:16:53.337 "serial_number": "SPDK1", 00:16:53.337 "model_number": "SPDK bdev Controller", 00:16:53.337 "max_namespaces": 32, 00:16:53.337 "min_cntlid": 1, 00:16:53.337 "max_cntlid": 65519, 00:16:53.337 "namespaces": [ 00:16:53.337 { 00:16:53.337 "nsid": 1, 00:16:53.337 "bdev_name": "Malloc1", 00:16:53.337 "name": "Malloc1", 00:16:53.337 "nguid": "FF20F6DA1D1340029D58AB7FAE91F289", 00:16:53.337 "uuid": "ff20f6da-1d13-4002-9d58-ab7fae91f289" 00:16:53.337 }, 00:16:53.337 { 00:16:53.337 "nsid": 2, 00:16:53.337 "bdev_name": "Malloc3", 00:16:53.337 "name": "Malloc3", 00:16:53.337 "nguid": "60D09ECF5B2E4737B4C81CFF6F59AA06", 00:16:53.337 "uuid": "60d09ecf-5b2e-4737-b4c8-1cff6f59aa06" 00:16:53.337 } 00:16:53.337 ] 00:16:53.337 }, 00:16:53.337 { 00:16:53.337 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:53.337 "subtype": "NVMe", 00:16:53.337 "listen_addresses": [ 00:16:53.337 { 00:16:53.337 "transport": "VFIOUSER", 00:16:53.337 "trtype": "VFIOUSER", 00:16:53.337 "adrfam": "IPv4", 00:16:53.337 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:53.337 "trsvcid": "0" 00:16:53.337 } 00:16:53.337 ], 00:16:53.337 "allow_any_host": true, 00:16:53.337 "hosts": [], 00:16:53.337 "serial_number": "SPDK2", 00:16:53.337 "model_number": "SPDK bdev Controller", 00:16:53.337 "max_namespaces": 32, 00:16:53.337 "min_cntlid": 1, 00:16:53.337 "max_cntlid": 65519, 00:16:53.337 "namespaces": [ 00:16:53.337 { 00:16:53.337 "nsid": 1, 00:16:53.337 "bdev_name": "Malloc2", 00:16:53.337 "name": "Malloc2", 00:16:53.337 "nguid": "47AD9146B2F84EB49C89FCE40ADDAFA6", 00:16:53.337 "uuid": "47ad9146-b2f8-4eb4-9c89-fce40addafa6" 00:16:53.337 } 00:16:53.337 ] 00:16:53.337 } 00:16:53.337 ] 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@34 -- # aerpid=920492 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:53.337 13:27:50 -- common/autotest_common.sh@1244 -- # local i=0 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:53.337 13:27:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:53.337 13:27:50 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:53.337 13:27:50 -- common/autotest_common.sh@1255 -- # return 0 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:53.337 13:27:50 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:53.337 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.599 Malloc4 00:16:53.599 13:27:50 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:53.599 13:27:51 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:53.599 Asynchronous Event Request test 00:16:53.599 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.599 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:53.599 Registering asynchronous event callbacks... 00:16:53.599 Starting namespace attribute notice tests for all controllers... 00:16:53.599 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:53.599 aer_cb - Changed Namespace 00:16:53.599 Cleaning up... 00:16:53.860 [ 00:16:53.860 { 00:16:53.860 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:53.860 "subtype": "Discovery", 00:16:53.860 "listen_addresses": [], 00:16:53.860 "allow_any_host": true, 00:16:53.860 "hosts": [] 00:16:53.860 }, 00:16:53.860 { 00:16:53.860 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:53.860 "subtype": "NVMe", 00:16:53.860 "listen_addresses": [ 00:16:53.860 { 00:16:53.860 "transport": "VFIOUSER", 00:16:53.860 "trtype": "VFIOUSER", 00:16:53.860 "adrfam": "IPv4", 00:16:53.860 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:53.860 "trsvcid": "0" 00:16:53.860 } 00:16:53.860 ], 00:16:53.860 "allow_any_host": true, 00:16:53.860 "hosts": [], 00:16:53.860 "serial_number": "SPDK1", 00:16:53.860 "model_number": "SPDK bdev Controller", 00:16:53.860 "max_namespaces": 32, 00:16:53.860 "min_cntlid": 1, 00:16:53.860 "max_cntlid": 65519, 00:16:53.860 "namespaces": [ 00:16:53.860 { 00:16:53.860 "nsid": 1, 00:16:53.860 "bdev_name": "Malloc1", 00:16:53.860 "name": "Malloc1", 00:16:53.860 "nguid": "FF20F6DA1D1340029D58AB7FAE91F289", 00:16:53.860 "uuid": "ff20f6da-1d13-4002-9d58-ab7fae91f289" 00:16:53.860 }, 00:16:53.860 { 00:16:53.860 "nsid": 2, 00:16:53.860 "bdev_name": "Malloc3", 00:16:53.860 "name": "Malloc3", 00:16:53.860 "nguid": "60D09ECF5B2E4737B4C81CFF6F59AA06", 00:16:53.860 "uuid": "60d09ecf-5b2e-4737-b4c8-1cff6f59aa06" 00:16:53.860 } 00:16:53.860 ] 00:16:53.860 }, 00:16:53.860 { 00:16:53.860 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:53.860 "subtype": "NVMe", 00:16:53.860 "listen_addresses": [ 00:16:53.860 { 00:16:53.860 "transport": "VFIOUSER", 00:16:53.860 "trtype": "VFIOUSER", 00:16:53.860 "adrfam": "IPv4", 00:16:53.860 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:53.860 "trsvcid": "0" 00:16:53.860 } 00:16:53.860 ], 00:16:53.860 "allow_any_host": true, 00:16:53.860 "hosts": [], 00:16:53.860 "serial_number": "SPDK2", 00:16:53.860 "model_number": "SPDK bdev Controller", 00:16:53.860 "max_namespaces": 32, 00:16:53.860 "min_cntlid": 1, 00:16:53.861 "max_cntlid": 65519, 00:16:53.861 "namespaces": [ 00:16:53.861 { 00:16:53.861 "nsid": 1, 00:16:53.861 "bdev_name": "Malloc2", 00:16:53.861 "name": "Malloc2", 00:16:53.861 "nguid": "47AD9146B2F84EB49C89FCE40ADDAFA6", 00:16:53.861 "uuid": "47ad9146-b2f8-4eb4-9c89-fce40addafa6" 
00:16:53.861 }, 00:16:53.861 { 00:16:53.861 "nsid": 2, 00:16:53.861 "bdev_name": "Malloc4", 00:16:53.861 "name": "Malloc4", 00:16:53.861 "nguid": "109ABE5E32D14B5EB94E83DEA6BC16E8", 00:16:53.861 "uuid": "109abe5e-32d1-4b5e-b94e-83dea6bc16e8" 00:16:53.861 } 00:16:53.861 ] 00:16:53.861 } 00:16:53.861 ] 00:16:53.861 13:27:51 -- target/nvmf_vfio_user.sh@44 -- # wait 920492 00:16:53.861 13:27:51 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:53.861 13:27:51 -- target/nvmf_vfio_user.sh@95 -- # killprocess 911034 00:16:53.861 13:27:51 -- common/autotest_common.sh@926 -- # '[' -z 911034 ']' 00:16:53.861 13:27:51 -- common/autotest_common.sh@930 -- # kill -0 911034 00:16:53.861 13:27:51 -- common/autotest_common.sh@931 -- # uname 00:16:53.861 13:27:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.861 13:27:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 911034 00:16:53.861 13:27:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:53.861 13:27:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:53.861 13:27:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 911034' 00:16:53.861 killing process with pid 911034 00:16:53.861 13:27:51 -- common/autotest_common.sh@945 -- # kill 911034 00:16:53.861 [2024-07-26 13:27:51.240048] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:53.861 13:27:51 -- common/autotest_common.sh@950 -- # wait 911034 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=920700 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 920700' 00:16:54.122 Process pid: 920700 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:54.122 13:27:51 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 920700 00:16:54.122 13:27:51 -- common/autotest_common.sh@819 -- # '[' -z 920700 ']' 00:16:54.122 13:27:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.122 13:27:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.122 13:27:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.122 13:27:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.122 13:27:51 -- common/autotest_common.sh@10 -- # set +x 00:16:54.122 [2024-07-26 13:27:51.453165] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:54.122 [2024-07-26 13:27:51.454090] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:54.122 [2024-07-26 13:27:51.454128] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.122 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.122 [2024-07-26 13:27:51.513929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.122 [2024-07-26 13:27:51.542915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.122 [2024-07-26 13:27:51.543054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.122 [2024-07-26 13:27:51.543065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.122 [2024-07-26 13:27:51.543073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.122 [2024-07-26 13:27:51.543236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.122 [2024-07-26 13:27:51.543311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.122 [2024-07-26 13:27:51.543469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.122 [2024-07-26 13:27:51.543470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.384 [2024-07-26 13:27:51.608284] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:54.384 [2024-07-26 13:27:51.608514] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:54.384 [2024-07-26 13:27:51.608575] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:54.384 [2024-07-26 13:27:51.608728] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:54.384 [2024-07-26 13:27:51.608808] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
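[editor annotation] At this point the second target is up in interrupt mode (nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode, reactors on cores 0-3 and all poll-group threads switched to interrupt mode). The @64-@74 trace lines that follow rebuild the two vfio-user endpoints against it. A condensed sketch of that setup sequence, with each RPC copied from those trace lines; only the loop form is an assumption (run from the spdk checkout, against the running target):

    # Recreate the VFIOUSER transport and both subsystems on the interrupt-mode target.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done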
00:16:54.957 13:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.958 13:27:52 -- common/autotest_common.sh@852 -- # return 0 00:16:54.958 13:27:52 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:55.903 13:27:53 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:55.903 13:27:53 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:55.903 13:27:53 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:56.165 13:27:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:56.165 13:27:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:56.165 13:27:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:56.165 Malloc1 00:16:56.165 13:27:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:56.426 13:27:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:56.426 13:27:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:56.687 13:27:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:56.687 13:27:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:56.687 13:27:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:56.949 Malloc2 00:16:56.949 13:27:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:56.949 13:27:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:57.210 13:27:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:57.471 13:27:54 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:57.471 13:27:54 -- target/nvmf_vfio_user.sh@95 -- # killprocess 920700 00:16:57.471 13:27:54 -- common/autotest_common.sh@926 -- # '[' -z 920700 ']' 00:16:57.471 13:27:54 -- common/autotest_common.sh@930 -- # kill -0 920700 00:16:57.471 13:27:54 -- common/autotest_common.sh@931 -- # uname 00:16:57.471 13:27:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.471 13:27:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 920700 00:16:57.471 13:27:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:57.471 13:27:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:57.471 13:27:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 920700' 00:16:57.471 killing process with pid 920700 00:16:57.471 13:27:54 -- common/autotest_common.sh@945 -- # kill 920700 00:16:57.471 13:27:54 -- common/autotest_common.sh@950 -- # wait 920700 00:16:57.471 13:27:54 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:57.471 13:27:54 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:57.471 00:16:57.471 real 0m50.263s 00:16:57.471 user 3m19.506s 00:16:57.471 sys 0m2.946s 00:16:57.471 13:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.471 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:57.471 ************************************ 00:16:57.471 END TEST nvmf_vfio_user 00:16:57.471 ************************************ 00:16:57.733 13:27:54 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:57.733 13:27:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:57.733 13:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.733 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 ************************************ 00:16:57.733 START TEST nvmf_vfio_user_nvme_compliance 00:16:57.733 ************************************ 00:16:57.733 13:27:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:57.733 * Looking for test storage... 00:16:57.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:57.733 13:27:55 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.733 13:27:55 -- nvmf/common.sh@7 -- # uname -s 00:16:57.733 13:27:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.733 13:27:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.733 13:27:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.733 13:27:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.733 13:27:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.733 13:27:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.733 13:27:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.733 13:27:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.733 13:27:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.733 13:27:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.733 13:27:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.733 13:27:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.733 13:27:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.733 13:27:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.733 13:27:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.733 13:27:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.733 13:27:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.733 13:27:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.733 13:27:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.733 13:27:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.733 13:27:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.733 13:27:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.733 13:27:55 -- paths/export.sh@5 -- # export PATH 00:16:57.733 13:27:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.733 13:27:55 -- nvmf/common.sh@46 -- # : 0 00:16:57.733 13:27:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.733 13:27:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.733 13:27:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.733 13:27:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.733 13:27:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.733 13:27:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:57.733 13:27:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.733 13:27:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.733 13:27:55 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.733 13:27:55 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.733 13:27:55 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:57.733 13:27:55 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:57.733 13:27:55 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:57.733 13:27:55 -- compliance/compliance.sh@20 -- # nvmfpid=921455 00:16:57.733 13:27:55 -- compliance/compliance.sh@21 -- # echo 'Process pid: 921455' 00:16:57.733 Process pid: 921455 00:16:57.733 13:27:55 -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:57.733 13:27:55 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:57.733 13:27:55 -- compliance/compliance.sh@24 -- # waitforlisten 921455 00:16:57.733 13:27:55 -- common/autotest_common.sh@819 -- # '[' -z 921455 ']' 00:16:57.733 13:27:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.733 13:27:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:57.733 13:27:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.733 13:27:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:57.733 13:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:57.733 [2024-07-26 13:27:55.143107] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:57.733 [2024-07-26 13:27:55.143183] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.733 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.733 [2024-07-26 13:27:55.205057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.995 [2024-07-26 13:27:55.233690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.995 [2024-07-26 13:27:55.233823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.995 [2024-07-26 13:27:55.233833] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.995 [2024-07-26 13:27:55.233840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
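Stripped of the xtrace noise, the compliance pass that the following lines trace boils down to roughly this sequence (rpc_cmd is the test harness's RPC helper, SPDK_DIR again stands for the workspace checkout; a sketch of what compliance.sh does, not the script itself):

# compliance target on cores 0-2 (-m 0x7), as launched above
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

# one vfio-user subsystem backed by a 64 MB / 512-byte-block malloc bdev
rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# run the CUnit compliance suite against that vfio-user endpoint
$SPDK_DIR/test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'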
00:16:57.995 [2024-07-26 13:27:55.233982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.995 [2024-07-26 13:27:55.234098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.995 [2024-07-26 13:27:55.234101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.567 13:27:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:58.567 13:27:55 -- common/autotest_common.sh@852 -- # return 0 00:16:58.567 13:27:55 -- compliance/compliance.sh@26 -- # sleep 1 00:16:59.511 13:27:56 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:59.511 13:27:56 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:59.511 13:27:56 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:59.511 13:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.511 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.511 13:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.511 13:27:56 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:59.511 13:27:56 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:59.511 13:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.511 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.511 malloc0 00:16:59.511 13:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.511 13:27:56 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:59.511 13:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.511 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.511 13:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.511 13:27:56 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:59.511 13:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.511 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.511 13:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.511 13:27:56 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:59.511 13:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.511 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:16:59.772 13:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.772 13:27:56 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:59.772 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.772 00:16:59.772 00:16:59.772 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.772 http://cunit.sourceforge.net/ 00:16:59.772 00:16:59.772 00:16:59.772 Suite: nvme_compliance 00:16:59.772 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 13:27:57.158574] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:59.772 [2024-07-26 13:27:57.158608] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:59.772 [2024-07-26 13:27:57.158614] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:59.772 passed 00:17:00.032 Test: admin_identify_ctrlr_verify_fused ...passed 00:17:00.033 Test: admin_identify_ns ...[2024-07-26 
13:27:57.414216] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:00.033 [2024-07-26 13:27:57.422221] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:00.033 passed 00:17:00.294 Test: admin_get_features_mandatory_features ...passed 00:17:00.294 Test: admin_get_features_optional_features ...passed 00:17:00.554 Test: admin_set_features_number_of_queues ...passed 00:17:00.554 Test: admin_get_log_page_mandatory_logs ...passed 00:17:00.815 Test: admin_get_log_page_with_lpo ...[2024-07-26 13:27:58.092214] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:00.815 passed 00:17:00.815 Test: fabric_property_get ...passed 00:17:01.078 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 13:27:58.296748] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:01.078 passed 00:17:01.078 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 13:27:58.476212] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:01.078 [2024-07-26 13:27:58.492211] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:01.078 passed 00:17:01.339 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 13:27:58.592516] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:01.339 passed 00:17:01.340 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 13:27:58.763210] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:01.340 [2024-07-26 13:27:58.787211] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:01.601 passed 00:17:01.601 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 13:27:58.887546] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:01.601 [2024-07-26 13:27:58.887571] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:01.601 passed 00:17:01.601 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 13:27:59.074209] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:01.862 [2024-07-26 13:27:59.082213] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:01.862 [2024-07-26 13:27:59.090216] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:01.862 [2024-07-26 13:27:59.098208] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:01.862 passed 00:17:01.862 Test: admin_create_io_sq_verify_pc ...[2024-07-26 13:27:59.233218] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:01.862 passed 00:17:03.250 Test: admin_create_io_qp_max_qps ...[2024-07-26 13:28:00.459210] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:03.512 passed 00:17:03.774 Test: admin_create_io_sq_shared_cq ...[2024-07-26 13:28:01.074213] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:03.774 passed 00:17:03.774 00:17:03.774 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.774 suites 1 1 n/a 0 0 00:17:03.774 tests 18 18 18 0 0 00:17:03.774 asserts 360 360 360 0 n/a 00:17:03.774 00:17:03.774 Elapsed time = 1.658 seconds 00:17:03.774 
13:28:01 -- compliance/compliance.sh@42 -- # killprocess 921455 00:17:03.774 13:28:01 -- common/autotest_common.sh@926 -- # '[' -z 921455 ']' 00:17:03.774 13:28:01 -- common/autotest_common.sh@930 -- # kill -0 921455 00:17:03.774 13:28:01 -- common/autotest_common.sh@931 -- # uname 00:17:03.774 13:28:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.774 13:28:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 921455 00:17:03.774 13:28:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.774 13:28:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.774 13:28:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 921455' 00:17:03.774 killing process with pid 921455 00:17:03.774 13:28:01 -- common/autotest_common.sh@945 -- # kill 921455 00:17:03.774 13:28:01 -- common/autotest_common.sh@950 -- # wait 921455 00:17:04.036 13:28:01 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:04.036 13:28:01 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:04.036 00:17:04.036 real 0m6.396s 00:17:04.036 user 0m18.407s 00:17:04.036 sys 0m0.451s 00:17:04.036 13:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.036 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.036 ************************************ 00:17:04.036 END TEST nvmf_vfio_user_nvme_compliance 00:17:04.036 ************************************ 00:17:04.036 13:28:01 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:04.036 13:28:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:04.036 13:28:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:04.036 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.036 ************************************ 00:17:04.036 START TEST nvmf_vfio_user_fuzz 00:17:04.036 ************************************ 00:17:04.036 13:28:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:04.036 * Looking for test storage... 
00:17:04.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.036 13:28:01 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.036 13:28:01 -- nvmf/common.sh@7 -- # uname -s 00:17:04.036 13:28:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.036 13:28:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.036 13:28:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.036 13:28:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.036 13:28:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.036 13:28:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.036 13:28:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.036 13:28:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.036 13:28:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.036 13:28:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.036 13:28:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.036 13:28:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.036 13:28:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.036 13:28:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.036 13:28:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.036 13:28:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.297 13:28:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.297 13:28:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.297 13:28:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.297 13:28:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.297 13:28:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.297 13:28:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.297 13:28:01 -- paths/export.sh@5 -- # export PATH 00:17:04.297 13:28:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.297 13:28:01 -- nvmf/common.sh@46 -- # : 0 00:17:04.297 13:28:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:04.297 13:28:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:04.297 13:28:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:04.297 13:28:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.297 13:28:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.297 13:28:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:04.297 13:28:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:04.297 13:28:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=922856 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 922856' 00:17:04.297 Process pid: 922856 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.297 13:28:01 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 922856 00:17:04.297 13:28:01 -- common/autotest_common.sh@819 -- # '[' -z 922856 ']' 00:17:04.297 13:28:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.297 13:28:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.297 13:28:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
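The lines that follow provision a single vfio-user controller and then point SPDK's NVMe fuzzer at it for a fixed-length run. Condensed (flags copied from the trace below, SPDK_DIR standing for the workspace checkout; a sketch, not vfio_user_fuzz.sh itself):

# fuzz target pinned to core 0 (-m 0x1); the fuzzer itself runs on core 1 (-m 0x2)
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# 30-second fuzz run (-t 30) with a fixed starting seed (-S 123456); the summary further
# down lists the admin and I/O opcodes that completed successfully during the run
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a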
00:17:04.297 13:28:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.297 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:17:04.969 13:28:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.969 13:28:02 -- common/autotest_common.sh@852 -- # return 0 00:17:04.969 13:28:02 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:05.910 13:28:03 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:05.910 13:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.910 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.910 13:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.910 13:28:03 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:05.910 13:28:03 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:05.910 13:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.910 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.910 malloc0 00:17:05.910 13:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.910 13:28:03 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:05.910 13:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:05.910 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:05.910 13:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:05.910 13:28:03 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:06.172 13:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.172 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.172 13:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.172 13:28:03 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:06.172 13:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.172 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:17:06.172 13:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.172 13:28:03 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:06.172 13:28:03 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:38.293 Fuzzing completed. 
Shutting down the fuzz application 00:17:38.293 00:17:38.293 Dumping successful admin opcodes: 00:17:38.293 8, 9, 10, 24, 00:17:38.293 Dumping successful io opcodes: 00:17:38.293 0, 00:17:38.293 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1308066, total successful commands: 5116, random_seed: 871487168 00:17:38.293 NS: 0x200003a1ef00 admin qp, Total commands completed: 187643, total successful commands: 1507, random_seed: 234345472 00:17:38.293 13:28:33 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:38.293 13:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.293 13:28:33 -- common/autotest_common.sh@10 -- # set +x 00:17:38.293 13:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.293 13:28:33 -- target/vfio_user_fuzz.sh@46 -- # killprocess 922856 00:17:38.293 13:28:33 -- common/autotest_common.sh@926 -- # '[' -z 922856 ']' 00:17:38.294 13:28:33 -- common/autotest_common.sh@930 -- # kill -0 922856 00:17:38.294 13:28:33 -- common/autotest_common.sh@931 -- # uname 00:17:38.294 13:28:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.294 13:28:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 922856 00:17:38.294 13:28:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.294 13:28:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.294 13:28:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 922856' 00:17:38.294 killing process with pid 922856 00:17:38.294 13:28:33 -- common/autotest_common.sh@945 -- # kill 922856 00:17:38.294 13:28:33 -- common/autotest_common.sh@950 -- # wait 922856 00:17:38.294 13:28:33 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:38.294 13:28:33 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:38.294 00:17:38.294 real 0m32.591s 00:17:38.294 user 0m35.934s 00:17:38.294 sys 0m26.319s 00:17:38.294 13:28:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.294 13:28:33 -- common/autotest_common.sh@10 -- # set +x 00:17:38.294 ************************************ 00:17:38.294 END TEST nvmf_vfio_user_fuzz 00:17:38.294 ************************************ 00:17:38.294 13:28:34 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:38.294 13:28:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:38.294 13:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:38.294 13:28:34 -- common/autotest_common.sh@10 -- # set +x 00:17:38.294 ************************************ 00:17:38.294 START TEST nvmf_host_management 00:17:38.294 ************************************ 00:17:38.294 13:28:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:38.294 * Looking for test storage... 
00:17:38.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.294 13:28:34 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.294 13:28:34 -- nvmf/common.sh@7 -- # uname -s 00:17:38.294 13:28:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.294 13:28:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.294 13:28:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.294 13:28:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.294 13:28:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.294 13:28:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.294 13:28:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.294 13:28:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.294 13:28:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.294 13:28:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.294 13:28:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.294 13:28:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.294 13:28:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.294 13:28:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.294 13:28:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.294 13:28:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.294 13:28:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.294 13:28:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.294 13:28:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.294 13:28:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.294 13:28:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.294 13:28:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.294 13:28:34 -- paths/export.sh@5 -- # export PATH 00:17:38.294 13:28:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.294 13:28:34 -- nvmf/common.sh@46 -- # : 0 00:17:38.294 13:28:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:38.294 13:28:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:38.294 13:28:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:38.294 13:28:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.294 13:28:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.294 13:28:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:38.294 13:28:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:38.294 13:28:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:38.294 13:28:34 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.294 13:28:34 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.294 13:28:34 -- target/host_management.sh@104 -- # nvmftestinit 00:17:38.294 13:28:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:38.294 13:28:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.294 13:28:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:38.294 13:28:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:38.294 13:28:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:38.294 13:28:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.294 13:28:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.294 13:28:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.294 13:28:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:38.294 13:28:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:38.294 13:28:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:38.294 13:28:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.587 13:28:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:43.587 13:28:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:43.587 13:28:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:43.587 13:28:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:43.587 13:28:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:43.587 13:28:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:43.587 13:28:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:43.587 13:28:40 -- nvmf/common.sh@294 -- # net_devs=() 00:17:43.587 13:28:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:43.587 
13:28:40 -- nvmf/common.sh@295 -- # e810=() 00:17:43.587 13:28:40 -- nvmf/common.sh@295 -- # local -ga e810 00:17:43.587 13:28:40 -- nvmf/common.sh@296 -- # x722=() 00:17:43.587 13:28:40 -- nvmf/common.sh@296 -- # local -ga x722 00:17:43.587 13:28:40 -- nvmf/common.sh@297 -- # mlx=() 00:17:43.587 13:28:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:43.587 13:28:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.587 13:28:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:43.587 13:28:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:43.587 13:28:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.587 13:28:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:43.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:43.587 13:28:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:43.587 13:28:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:43.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:43.587 13:28:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.587 13:28:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.587 13:28:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.587 13:28:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:17:43.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:43.587 13:28:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.587 13:28:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:43.587 13:28:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.587 13:28:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.587 13:28:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:43.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:43.587 13:28:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.587 13:28:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:43.587 13:28:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:43.587 13:28:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:43.587 13:28:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.587 13:28:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.587 13:28:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.587 13:28:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:43.587 13:28:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.587 13:28:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.587 13:28:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:43.587 13:28:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.587 13:28:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.587 13:28:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:43.587 13:28:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:43.587 13:28:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.587 13:28:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.587 13:28:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.587 13:28:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.587 13:28:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:43.587 13:28:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.849 13:28:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.849 13:28:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.849 13:28:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:43.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:17:43.849 00:17:43.849 --- 10.0.0.2 ping statistics --- 00:17:43.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.849 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:17:43.849 13:28:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:17:43.849 00:17:43.849 --- 10.0.0.1 ping statistics --- 00:17:43.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.849 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:17:43.849 13:28:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.849 13:28:41 -- nvmf/common.sh@410 -- # return 0 00:17:43.849 13:28:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:43.849 13:28:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.849 13:28:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:43.849 13:28:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:43.849 13:28:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.849 13:28:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:43.849 13:28:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:43.849 13:28:41 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:43.849 13:28:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:43.849 13:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:43.849 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:43.849 ************************************ 00:17:43.849 START TEST nvmf_host_management 00:17:43.849 ************************************ 00:17:43.849 13:28:41 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:43.849 13:28:41 -- target/host_management.sh@69 -- # starttarget 00:17:43.849 13:28:41 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:43.849 13:28:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.849 13:28:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:43.849 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:43.849 13:28:41 -- nvmf/common.sh@469 -- # nvmfpid=932911 00:17:43.849 13:28:41 -- nvmf/common.sh@470 -- # waitforlisten 932911 00:17:43.849 13:28:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:43.849 13:28:41 -- common/autotest_common.sh@819 -- # '[' -z 932911 ']' 00:17:43.849 13:28:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.849 13:28:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.849 13:28:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.849 13:28:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.849 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:17:43.849 [2024-07-26 13:28:41.266404] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
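Condensed from the nvmf_tcp_init trace above: the TCP-based tests move one port of the NIC (cvl_0_0) into a dedicated network namespace for the target, leave the peer port (cvl_0_1) in the root namespace for the initiator, and sanity-check the link before starting nvmf_tgt inside that namespace. Roughly (addresses and interface names copied from the log, SPDK_DIR as before; a sketch of nvmf/common.sh's behaviour, not the script itself):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in

# both directions answer (0.693 ms and 0.440 ms in the trace), so the target can start
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# host_management's target runs on cores 1-4 (-m 0x1E) inside the namespace
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &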
00:17:43.849 [2024-07-26 13:28:41.266468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.849 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.111 [2024-07-26 13:28:41.354580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.111 [2024-07-26 13:28:41.401847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.111 [2024-07-26 13:28:41.402004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.111 [2024-07-26 13:28:41.402015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.111 [2024-07-26 13:28:41.402025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.111 [2024-07-26 13:28:41.402183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.111 [2024-07-26 13:28:41.402351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.111 [2024-07-26 13:28:41.402652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.111 [2024-07-26 13:28:41.402655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.685 13:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.685 13:28:42 -- common/autotest_common.sh@852 -- # return 0 00:17:44.685 13:28:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.685 13:28:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:44.685 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 13:28:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.685 13:28:42 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.685 13:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.685 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 [2024-07-26 13:28:42.091436] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.685 13:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.685 13:28:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:44.685 13:28:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:44.685 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 13:28:42 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:44.685 13:28:42 -- target/host_management.sh@23 -- # cat 00:17:44.685 13:28:42 -- target/host_management.sh@30 -- # rpc_cmd 00:17:44.685 13:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.685 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 Malloc0 00:17:44.685 [2024-07-26 13:28:42.150700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.947 13:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:44.947 13:28:42 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:44.947 13:28:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:44.947 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.947 13:28:42 -- target/host_management.sh@73 -- # perfpid=932978 00:17:44.947 13:28:42 -- target/host_management.sh@74 -- # 
waitforlisten 932978 /var/tmp/bdevperf.sock 00:17:44.947 13:28:42 -- common/autotest_common.sh@819 -- # '[' -z 932978 ']' 00:17:44.947 13:28:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.947 13:28:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.947 13:28:42 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:44.947 13:28:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.947 13:28:42 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:44.947 13:28:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.947 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:17:44.947 13:28:42 -- nvmf/common.sh@520 -- # config=() 00:17:44.947 13:28:42 -- nvmf/common.sh@520 -- # local subsystem config 00:17:44.947 13:28:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:44.947 13:28:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:44.947 { 00:17:44.947 "params": { 00:17:44.947 "name": "Nvme$subsystem", 00:17:44.947 "trtype": "$TEST_TRANSPORT", 00:17:44.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.947 "adrfam": "ipv4", 00:17:44.947 "trsvcid": "$NVMF_PORT", 00:17:44.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.947 "hdgst": ${hdgst:-false}, 00:17:44.947 "ddgst": ${ddgst:-false} 00:17:44.947 }, 00:17:44.947 "method": "bdev_nvme_attach_controller" 00:17:44.947 } 00:17:44.947 EOF 00:17:44.947 )") 00:17:44.947 13:28:42 -- nvmf/common.sh@542 -- # cat 00:17:44.947 13:28:42 -- nvmf/common.sh@544 -- # jq . 00:17:44.947 13:28:42 -- nvmf/common.sh@545 -- # IFS=, 00:17:44.947 13:28:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:44.947 "params": { 00:17:44.947 "name": "Nvme0", 00:17:44.947 "trtype": "tcp", 00:17:44.947 "traddr": "10.0.0.2", 00:17:44.947 "adrfam": "ipv4", 00:17:44.947 "trsvcid": "4420", 00:17:44.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:44.947 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:44.947 "hdgst": false, 00:17:44.947 "ddgst": false 00:17:44.947 }, 00:17:44.947 "method": "bdev_nvme_attach_controller" 00:17:44.947 }' 00:17:44.947 [2024-07-26 13:28:42.247778] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:44.947 [2024-07-26 13:28:42.247830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932978 ] 00:17:44.947 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.947 [2024-07-26 13:28:42.306901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.947 [2024-07-26 13:28:42.335683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.208 Running I/O for 10 seconds... 
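The heredoc-style JSON above is what gen_nvmf_target_json 0 expands and feeds to bdevperf on /dev/fd/63: a single bdev_nvme_attach_controller entry named Nvme0 pointing at the NVMe/TCP listener reported earlier (10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0). The perf side of the test is then roughly the two commands below (flags copied from the trace; a sketch of the harness steps, not the harness itself):

# 64-deep, 64 KiB verify workload for 10 seconds against the attached controller
$SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 \
    -q 64 -o 65536 -w verify -t 10

# host_management polls progress over bdevperf's own RPC socket until enough reads have completed
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'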
00:17:45.783 13:28:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.783 13:28:43 -- common/autotest_common.sh@852 -- # return 0 00:17:45.783 13:28:43 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:45.783 13:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.783 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.783 13:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.783 13:28:43 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.783 13:28:43 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:45.783 13:28:43 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:45.783 13:28:43 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:45.783 13:28:43 -- target/host_management.sh@52 -- # local ret=1 00:17:45.783 13:28:43 -- target/host_management.sh@53 -- # local i 00:17:45.783 13:28:43 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:45.783 13:28:43 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:45.783 13:28:43 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:45.783 13:28:43 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:45.783 13:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.783 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.783 13:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.783 13:28:43 -- target/host_management.sh@55 -- # read_io_count=698 00:17:45.783 13:28:43 -- target/host_management.sh@58 -- # '[' 698 -ge 100 ']' 00:17:45.783 13:28:43 -- target/host_management.sh@59 -- # ret=0 00:17:45.783 13:28:43 -- target/host_management.sh@60 -- # break 00:17:45.783 13:28:43 -- target/host_management.sh@64 -- # return 0 00:17:45.783 13:28:43 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:45.783 13:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.783 13:28:43 -- common/autotest_common.sh@10 -- # set +x
00:17:45.783 [2024-07-26 13:28:43.081886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33220 is same with the state(5) to be set (previous message repeated for the remaining recv-state checks on tqpair=0x1d33220)
00:17:45.783 [2024-07-26 13:28:43.083837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.783 [2024-07-26 13:28:43.083874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.783 [2024-07-26 13:28:43.083891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:17:45.783 [2024-07-26 13:28:43.083899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.783 [2024-07-26 13:28:43.083909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.783 [2024-07-26 13:28:43.083916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.783 [2024-07-26 13:28:43.083926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.783 [2024-07-26 13:28:43.083933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.783 [2024-07-26 13:28:43.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.783 [2024-07-26 13:28:43.083949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.083958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.083966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.083980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.083988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.083997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:45.784 [2024-07-26 13:28:43.084075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 
[2024-07-26 13:28:43.084265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 
[2024-07-26 13:28:43.084451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 
13:28:43.084634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 
13:28:43.084820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.084986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.084996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 
13:28:43.085004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.085015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.085023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.085033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.784 [2024-07-26 13:28:43.085041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.085050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fb290 is same with the state(5) to be set 00:17:45.784 [2024-07-26 13:28:43.085093] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6fb290 was disconnected and freed. reset controller. 00:17:45.784 [2024-07-26 13:28:43.086300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:45.784 13:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.784 13:28:43 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:45.784 13:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.784 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:17:45.784 task offset: 103552 on job bdev=Nvme0n1 fails 00:17:45.784 00:17:45.784 Latency(us) 00:17:45.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.784 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:45.784 Job: Nvme0n1 ended in about 0.45 seconds with error 00:17:45.784 Verification LBA range: start 0x0 length 0x400 00:17:45.784 Nvme0n1 : 0.45 1759.43 109.96 142.90 0.00 33163.96 1897.81 47404.37 00:17:45.784 =================================================================================================================== 00:17:45.784 Total : 1759.43 109.96 142.90 0.00 33163.96 1897.81 47404.37 00:17:45.784 [2024-07-26 13:28:43.088290] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:45.784 [2024-07-26 13:28:43.088315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd6c0 (9): Bad file descriptor 00:17:45.784 [2024-07-26 13:28:43.093511] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:45.784 [2024-07-26 13:28:43.093645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:45.784 [2024-07-26 13:28:43.093668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.784 [2024-07-26 13:28:43.093684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:45.784 [2024-07-26 13:28:43.093692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:45.784 [2024-07-26 13:28:43.093699] 
nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:45.784 [2024-07-26 13:28:43.093706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6fd6c0 00:17:45.784 [2024-07-26 13:28:43.093726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd6c0 (9): Bad file descriptor 00:17:45.784 [2024-07-26 13:28:43.093739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:45.784 [2024-07-26 13:28:43.093746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:45.784 [2024-07-26 13:28:43.093755] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:45.784 [2024-07-26 13:28:43.093769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:45.784 13:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.784 13:28:43 -- target/host_management.sh@87 -- # sleep 1 00:17:46.728 13:28:44 -- target/host_management.sh@91 -- # kill -9 932978 00:17:46.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (932978) - No such process 00:17:46.728 13:28:44 -- target/host_management.sh@91 -- # true 00:17:46.728 13:28:44 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:46.728 13:28:44 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:46.728 13:28:44 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:46.728 13:28:44 -- nvmf/common.sh@520 -- # config=() 00:17:46.728 13:28:44 -- nvmf/common.sh@520 -- # local subsystem config 00:17:46.728 13:28:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:46.728 13:28:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:46.728 { 00:17:46.728 "params": { 00:17:46.728 "name": "Nvme$subsystem", 00:17:46.728 "trtype": "$TEST_TRANSPORT", 00:17:46.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:46.728 "adrfam": "ipv4", 00:17:46.728 "trsvcid": "$NVMF_PORT", 00:17:46.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:46.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:46.728 "hdgst": ${hdgst:-false}, 00:17:46.728 "ddgst": ${ddgst:-false} 00:17:46.728 }, 00:17:46.728 "method": "bdev_nvme_attach_controller" 00:17:46.728 } 00:17:46.728 EOF 00:17:46.728 )") 00:17:46.728 13:28:44 -- nvmf/common.sh@542 -- # cat 00:17:46.728 13:28:44 -- nvmf/common.sh@544 -- # jq . 00:17:46.728 13:28:44 -- nvmf/common.sh@545 -- # IFS=, 00:17:46.728 13:28:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:46.728 "params": { 00:17:46.728 "name": "Nvme0", 00:17:46.728 "trtype": "tcp", 00:17:46.728 "traddr": "10.0.0.2", 00:17:46.728 "adrfam": "ipv4", 00:17:46.728 "trsvcid": "4420", 00:17:46.728 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:46.728 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:46.728 "hdgst": false, 00:17:46.728 "ddgst": false 00:17:46.728 }, 00:17:46.728 "method": "bdev_nvme_attach_controller" 00:17:46.728 }' 00:17:46.728 [2024-07-26 13:28:44.154874] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
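The failed reset above is the point of the host_management test: once nvmf_subsystem_remove_host drops the host NQN from cnode0's allowed list, the next fabrics CONNECT is rejected ("does not allow host", sct 1 / sc 132) and bdevperf cannot reconnect; nvmf_subsystem_add_host then restores access before the second run. A minimal sketch of that toggle, assuming rpc.py is invoked from the SPDK tree:

# Sketch (rpc.py path abbreviated): the host-access toggle exercised by the test.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# CONNECT attempts from host0 now fail with "Subsystem ... does not allow host ..."
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0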
00:17:46.728 [2024-07-26 13:28:44.154928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933375 ] 00:17:46.728 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.990 [2024-07-26 13:28:44.212689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.990 [2024-07-26 13:28:44.240544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.990 Running I/O for 1 seconds... 00:17:48.376 00:17:48.376 Latency(us) 00:17:48.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.376 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:48.376 Verification LBA range: start 0x0 length 0x400 00:17:48.376 Nvme0n1 : 1.02 2086.35 130.40 0.00 0.00 30247.26 3549.87 42598.40 00:17:48.376 =================================================================================================================== 00:17:48.376 Total : 2086.35 130.40 0.00 0.00 30247.26 3549.87 42598.40 00:17:48.376 13:28:45 -- target/host_management.sh@101 -- # stoptarget 00:17:48.376 13:28:45 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:48.376 13:28:45 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:48.376 13:28:45 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:48.376 13:28:45 -- target/host_management.sh@40 -- # nvmftestfini 00:17:48.376 13:28:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:48.376 13:28:45 -- nvmf/common.sh@116 -- # sync 00:17:48.376 13:28:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:48.376 13:28:45 -- nvmf/common.sh@119 -- # set +e 00:17:48.376 13:28:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:48.376 13:28:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:48.376 rmmod nvme_tcp 00:17:48.376 rmmod nvme_fabrics 00:17:48.376 rmmod nvme_keyring 00:17:48.376 13:28:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:48.376 13:28:45 -- nvmf/common.sh@123 -- # set -e 00:17:48.376 13:28:45 -- nvmf/common.sh@124 -- # return 0 00:17:48.376 13:28:45 -- nvmf/common.sh@477 -- # '[' -n 932911 ']' 00:17:48.376 13:28:45 -- nvmf/common.sh@478 -- # killprocess 932911 00:17:48.376 13:28:45 -- common/autotest_common.sh@926 -- # '[' -z 932911 ']' 00:17:48.376 13:28:45 -- common/autotest_common.sh@930 -- # kill -0 932911 00:17:48.376 13:28:45 -- common/autotest_common.sh@931 -- # uname 00:17:48.376 13:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.376 13:28:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 932911 00:17:48.376 13:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:48.376 13:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:48.376 13:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 932911' 00:17:48.376 killing process with pid 932911 00:17:48.376 13:28:45 -- common/autotest_common.sh@945 -- # kill 932911 00:17:48.376 13:28:45 -- common/autotest_common.sh@950 -- # wait 932911 00:17:48.376 [2024-07-26 13:28:45.815957] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:48.376 13:28:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:48.376 13:28:45 -- nvmf/common.sh@483 -- # [[ 
tcp == \t\c\p ]] 00:17:48.376 13:28:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:48.376 13:28:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.376 13:28:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:48.376 13:28:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.376 13:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.376 13:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.927 13:28:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:50.927 00:17:50.927 real 0m6.706s 00:17:50.927 user 0m20.039s 00:17:50.927 sys 0m1.091s 00:17:50.927 13:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.927 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:50.927 ************************************ 00:17:50.927 END TEST nvmf_host_management 00:17:50.927 ************************************ 00:17:50.927 13:28:47 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:50.927 00:17:50.927 real 0m13.922s 00:17:50.927 user 0m22.056s 00:17:50.927 sys 0m6.231s 00:17:50.927 13:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.927 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:50.927 ************************************ 00:17:50.927 END TEST nvmf_host_management 00:17:50.927 ************************************ 00:17:50.927 13:28:47 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:50.927 13:28:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:50.927 13:28:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.927 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:17:50.927 ************************************ 00:17:50.927 START TEST nvmf_lvol 00:17:50.927 ************************************ 00:17:50.927 13:28:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:50.927 * Looking for test storage... 
00:17:50.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.927 13:28:48 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.927 13:28:48 -- nvmf/common.sh@7 -- # uname -s 00:17:50.927 13:28:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.927 13:28:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.927 13:28:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.927 13:28:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.927 13:28:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.927 13:28:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.927 13:28:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.927 13:28:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.927 13:28:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.927 13:28:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.927 13:28:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.927 13:28:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.927 13:28:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.927 13:28:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.927 13:28:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.927 13:28:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.927 13:28:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.927 13:28:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.928 13:28:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.928 13:28:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.928 13:28:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.928 13:28:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.928 13:28:48 -- paths/export.sh@5 -- # export PATH 00:17:50.928 13:28:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.928 13:28:48 -- nvmf/common.sh@46 -- # : 0 00:17:50.928 13:28:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.928 13:28:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.928 13:28:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.928 13:28:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.928 13:28:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.928 13:28:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.928 13:28:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.928 13:28:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.928 13:28:48 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:50.928 13:28:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.928 13:28:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.928 13:28:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.928 13:28:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.928 13:28:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.928 13:28:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.928 13:28:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.928 13:28:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.928 13:28:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:50.928 13:28:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:50.928 13:28:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:50.928 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:17:57.601 13:28:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:57.601 13:28:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:57.601 13:28:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:57.601 13:28:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:57.601 13:28:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:57.601 13:28:54 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:17:57.601 13:28:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:57.601 13:28:54 -- nvmf/common.sh@294 -- # net_devs=() 00:17:57.601 13:28:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:57.601 13:28:54 -- nvmf/common.sh@295 -- # e810=() 00:17:57.601 13:28:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:57.601 13:28:54 -- nvmf/common.sh@296 -- # x722=() 00:17:57.601 13:28:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:57.601 13:28:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:57.601 13:28:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:57.601 13:28:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.601 13:28:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:57.601 13:28:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:57.601 13:28:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.601 13:28:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:57.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:57.601 13:28:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.601 13:28:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:57.601 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:57.601 13:28:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.601 13:28:54 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.601 13:28:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.601 13:28:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:57.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:57.601 13:28:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.601 13:28:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.601 13:28:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.601 13:28:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.601 13:28:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:57.601 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:57.601 13:28:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.601 13:28:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:57.601 13:28:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:57.601 13:28:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:57.601 13:28:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.601 13:28:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.601 13:28:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.601 13:28:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:57.601 13:28:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.601 13:28:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.601 13:28:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:57.601 13:28:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.601 13:28:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.601 13:28:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:57.602 13:28:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:57.602 13:28:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.602 13:28:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.602 13:28:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.602 13:28:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.602 13:28:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:57.602 13:28:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.602 13:28:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.602 13:28:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.602 13:28:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:57.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:17:57.602 00:17:57.602 --- 10.0.0.2 ping statistics --- 00:17:57.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.602 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:17:57.602 13:28:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:17:57.602 00:17:57.602 --- 10.0.0.1 ping statistics --- 00:17:57.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.602 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:17:57.602 13:28:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.602 13:28:54 -- nvmf/common.sh@410 -- # return 0 00:17:57.602 13:28:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.602 13:28:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.602 13:28:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.602 13:28:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.602 13:28:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.602 13:28:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.602 13:28:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.602 13:28:54 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:57.602 13:28:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.602 13:28:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:57.602 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:57.602 13:28:54 -- nvmf/common.sh@469 -- # nvmfpid=937868 00:17:57.602 13:28:54 -- nvmf/common.sh@470 -- # waitforlisten 937868 00:17:57.602 13:28:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:57.602 13:28:54 -- common/autotest_common.sh@819 -- # '[' -z 937868 ']' 00:17:57.602 13:28:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.602 13:28:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.602 13:28:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.602 13:28:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.602 13:28:54 -- common/autotest_common.sh@10 -- # set +x 00:17:57.602 [2024-07-26 13:28:54.969503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:57.602 [2024-07-26 13:28:54.969569] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.602 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.602 [2024-07-26 13:28:55.040670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.863 [2024-07-26 13:28:55.078098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.863 [2024-07-26 13:28:55.078246] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.863 [2024-07-26 13:28:55.078255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.863 [2024-07-26 13:28:55.078263] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
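For reference, the netns wiring that nvmf_tcp_init performs above can be reproduced by hand. This is a minimal sketch, assuming the same ice/E810 port names the log reports (cvl_0_0, cvl_0_1) and the 10.0.0.x addressing this harness conventionally uses; on other NICs the interface names will differ:

  # Move the target-side port into its own namespace so initiator and target
  # traffic crosses the NIC instead of the local loopback.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side stays in the default namespace on the second port.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP (port 4420) in, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings answering, the target started inside cvl_0_0_ns_spdk is reachable from the default namespace exactly as the perf and bdevperf runs below rely on.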
00:17:57.863 [2024-07-26 13:28:55.078348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.863 [2024-07-26 13:28:55.078468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.863 [2024-07-26 13:28:55.078470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.436 13:28:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.436 13:28:55 -- common/autotest_common.sh@852 -- # return 0 00:17:58.436 13:28:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.436 13:28:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:58.436 13:28:55 -- common/autotest_common.sh@10 -- # set +x 00:17:58.436 13:28:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.436 13:28:55 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.698 [2024-07-26 13:28:55.914032] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.698 13:28:55 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.698 13:28:56 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:58.698 13:28:56 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:58.959 13:28:56 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:58.959 13:28:56 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:59.221 13:28:56 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:59.221 13:28:56 -- target/nvmf_lvol.sh@29 -- # lvs=24ebc4ce-c449-4930-9f76-c412ab9084d5 00:17:59.221 13:28:56 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 24ebc4ce-c449-4930-9f76-c412ab9084d5 lvol 20 00:17:59.482 13:28:56 -- target/nvmf_lvol.sh@32 -- # lvol=0be12d2a-2aae-43c1-82ad-27d5263722c6 00:17:59.482 13:28:56 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:59.482 13:28:56 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0be12d2a-2aae-43c1-82ad-27d5263722c6 00:17:59.742 13:28:57 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:59.742 [2024-07-26 13:28:57.187128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.003 13:28:57 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.003 13:28:57 -- target/nvmf_lvol.sh@42 -- # perf_pid=938407 00:18:00.003 13:28:57 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:00.003 13:28:57 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:00.003 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.946 
13:28:58 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0be12d2a-2aae-43c1-82ad-27d5263722c6 MY_SNAPSHOT 00:18:01.217 13:28:58 -- target/nvmf_lvol.sh@47 -- # snapshot=45faced9-7f26-473d-b1d2-96801b729f11 00:18:01.217 13:28:58 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0be12d2a-2aae-43c1-82ad-27d5263722c6 30 00:18:01.484 13:28:58 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 45faced9-7f26-473d-b1d2-96801b729f11 MY_CLONE 00:18:01.484 13:28:58 -- target/nvmf_lvol.sh@49 -- # clone=ad651819-aeda-42d1-9840-3af71d86df75 00:18:01.745 13:28:58 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ad651819-aeda-42d1-9840-3af71d86df75 00:18:02.006 13:28:59 -- target/nvmf_lvol.sh@53 -- # wait 938407 00:18:12.076 Initializing NVMe Controllers 00:18:12.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:12.076 Controller IO queue size 128, less than required. 00:18:12.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:12.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:12.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:12.076 Initialization complete. Launching workers. 00:18:12.076 ======================================================== 00:18:12.076 Latency(us) 00:18:12.076 Device Information : IOPS MiB/s Average min max 00:18:12.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18206.80 71.12 7032.43 1182.37 49876.06 00:18:12.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12555.70 49.05 10184.65 3727.44 57146.27 00:18:12.076 ======================================================== 00:18:12.076 Total : 30762.50 120.17 8319.00 1182.37 57146.27 00:18:12.076 00:18:12.076 13:29:07 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:12.076 13:29:07 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0be12d2a-2aae-43c1-82ad-27d5263722c6 00:18:12.076 13:29:08 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24ebc4ce-c449-4930-9f76-c412ab9084d5 00:18:12.076 13:29:08 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:12.076 13:29:08 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:12.076 13:29:08 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:12.076 13:29:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:12.076 13:29:08 -- nvmf/common.sh@116 -- # sync 00:18:12.076 13:29:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:12.076 13:29:08 -- nvmf/common.sh@119 -- # set +e 00:18:12.076 13:29:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:12.076 13:29:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:12.076 rmmod nvme_tcp 00:18:12.076 rmmod nvme_fabrics 00:18:12.076 rmmod nvme_keyring 00:18:12.076 13:29:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:12.076 13:29:08 -- nvmf/common.sh@123 -- # set -e 00:18:12.076 13:29:08 -- nvmf/common.sh@124 -- # return 0 00:18:12.076 13:29:08 -- nvmf/common.sh@477 -- # '[' -n 937868 ']' 
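Condensed, the RPC sequence nvmf_lvol.sh drives above reduces to the sketch below. rpc.py is shortened to $rpc, and the captured UUIDs are whatever each create call returns at runtime, not fixed values; sizes are in MiB, matching the 128 MiB malloc-backed raid0 store used here:

  rpc=./scripts/rpc.py   # shorthand for the full workspace path used in the log

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                        # -> Malloc0
  $rpc bdev_malloc_create 64 512                        # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # While spdk_nvme_perf writes to the exported namespace, exercise the lvol features:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot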
00:18:12.076 13:29:08 -- nvmf/common.sh@478 -- # killprocess 937868 00:18:12.076 13:29:08 -- common/autotest_common.sh@926 -- # '[' -z 937868 ']' 00:18:12.076 13:29:08 -- common/autotest_common.sh@930 -- # kill -0 937868 00:18:12.076 13:29:08 -- common/autotest_common.sh@931 -- # uname 00:18:12.076 13:29:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:12.076 13:29:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 937868 00:18:12.076 13:29:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:12.076 13:29:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:12.076 13:29:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 937868' 00:18:12.076 killing process with pid 937868 00:18:12.076 13:29:08 -- common/autotest_common.sh@945 -- # kill 937868 00:18:12.076 13:29:08 -- common/autotest_common.sh@950 -- # wait 937868 00:18:12.076 13:29:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:12.076 13:29:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:12.076 13:29:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:12.076 13:29:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.076 13:29:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:12.076 13:29:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.076 13:29:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.076 13:29:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.463 13:29:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:13.463 00:18:13.463 real 0m22.612s 00:18:13.463 user 1m0.198s 00:18:13.463 sys 0m8.756s 00:18:13.463 13:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.463 13:29:10 -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 ************************************ 00:18:13.463 END TEST nvmf_lvol 00:18:13.463 ************************************ 00:18:13.463 13:29:10 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:13.463 13:29:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:13.463 13:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:13.463 13:29:10 -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 ************************************ 00:18:13.463 START TEST nvmf_lvs_grow 00:18:13.463 ************************************ 00:18:13.463 13:29:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:13.463 * Looking for test storage... 
00:18:13.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:13.463 13:29:10 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.463 13:29:10 -- nvmf/common.sh@7 -- # uname -s 00:18:13.463 13:29:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.463 13:29:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.463 13:29:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.464 13:29:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.464 13:29:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.464 13:29:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.464 13:29:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.464 13:29:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.464 13:29:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.464 13:29:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.464 13:29:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.464 13:29:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.464 13:29:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.464 13:29:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.464 13:29:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.464 13:29:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.464 13:29:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.464 13:29:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.464 13:29:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.464 13:29:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.464 13:29:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.464 13:29:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.464 13:29:10 -- paths/export.sh@5 -- # export PATH 00:18:13.464 13:29:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.464 13:29:10 -- nvmf/common.sh@46 -- # : 0 00:18:13.464 13:29:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.464 13:29:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.464 13:29:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.464 13:29:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.464 13:29:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.464 13:29:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:13.464 13:29:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.464 13:29:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.464 13:29:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.464 13:29:10 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.464 13:29:10 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:13.464 13:29:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.464 13:29:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.464 13:29:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.464 13:29:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.464 13:29:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.464 13:29:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.464 13:29:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.464 13:29:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.464 13:29:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:13.464 13:29:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:13.464 13:29:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:13.464 13:29:10 -- common/autotest_common.sh@10 -- # set +x 00:18:21.611 13:29:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.611 13:29:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:21.611 13:29:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:21.611 13:29:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:21.611 13:29:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:21.611 13:29:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:21.611 13:29:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:21.611 13:29:17 -- nvmf/common.sh@294 -- # net_devs=() 00:18:21.611 13:29:17 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:18:21.611 13:29:17 -- nvmf/common.sh@295 -- # e810=() 00:18:21.611 13:29:17 -- nvmf/common.sh@295 -- # local -ga e810 00:18:21.611 13:29:17 -- nvmf/common.sh@296 -- # x722=() 00:18:21.611 13:29:17 -- nvmf/common.sh@296 -- # local -ga x722 00:18:21.611 13:29:17 -- nvmf/common.sh@297 -- # mlx=() 00:18:21.611 13:29:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:21.611 13:29:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.611 13:29:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:21.611 13:29:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:21.611 13:29:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:21.611 13:29:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.611 13:29:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:21.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:21.611 13:29:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:21.611 13:29:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:21.611 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:21.611 13:29:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:21.611 13:29:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.611 13:29:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.611 13:29:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.611 13:29:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.611 13:29:17 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:21.611 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:21.611 13:29:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.611 13:29:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:21.611 13:29:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.611 13:29:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:21.611 13:29:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.611 13:29:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:21.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:21.611 13:29:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.611 13:29:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:21.611 13:29:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:21.611 13:29:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:21.611 13:29:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:21.611 13:29:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.611 13:29:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.611 13:29:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.612 13:29:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:21.612 13:29:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.612 13:29:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.612 13:29:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:21.612 13:29:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.612 13:29:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.612 13:29:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:21.612 13:29:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:21.612 13:29:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.612 13:29:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.612 13:29:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.612 13:29:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.612 13:29:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:21.612 13:29:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.612 13:29:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.612 13:29:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.612 13:29:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:21.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:18:21.612 00:18:21.612 --- 10.0.0.2 ping statistics --- 00:18:21.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.612 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:18:21.612 13:29:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:18:21.612 00:18:21.612 --- 10.0.0.1 ping statistics --- 00:18:21.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.612 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:18:21.612 13:29:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.612 13:29:17 -- nvmf/common.sh@410 -- # return 0 00:18:21.612 13:29:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:21.612 13:29:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.612 13:29:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:21.612 13:29:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:21.612 13:29:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.612 13:29:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:21.612 13:29:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:21.612 13:29:17 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:21.612 13:29:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:21.612 13:29:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:21.612 13:29:17 -- common/autotest_common.sh@10 -- # set +x 00:18:21.612 13:29:17 -- nvmf/common.sh@469 -- # nvmfpid=944778 00:18:21.612 13:29:17 -- nvmf/common.sh@470 -- # waitforlisten 944778 00:18:21.612 13:29:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:21.612 13:29:17 -- common/autotest_common.sh@819 -- # '[' -z 944778 ']' 00:18:21.612 13:29:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.612 13:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:21.612 13:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.612 13:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:21.612 13:29:17 -- common/autotest_common.sh@10 -- # set +x 00:18:21.612 [2024-07-26 13:29:18.042425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:21.612 [2024-07-26 13:29:18.042490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.612 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.612 [2024-07-26 13:29:18.113098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.612 [2024-07-26 13:29:18.149133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:21.612 [2024-07-26 13:29:18.149302] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.612 [2024-07-26 13:29:18.149313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.612 [2024-07-26 13:29:18.149321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
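The nvmfappstart helper used here boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket before any rpc.py call is issued. A rough equivalent, assuming the default /var/tmp/spdk.sock socket shown in the log and an SPDK build tree as the working directory:

  # Start the target in the namespace that owns the target-side port.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Poll until the app is up and listening on its RPC socket.
  rpc_sock=/var/tmp/spdk.sock
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

  # From here on, every rpc.py call targets the running nvmf_tgt;
  # make sure it is torn down even if the test aborts.
  trap 'kill $nvmfpid' SIGINT SIGTERM EXIT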
00:18:21.612 [2024-07-26 13:29:18.149342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.612 13:29:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:21.612 13:29:18 -- common/autotest_common.sh@852 -- # return 0 00:18:21.612 13:29:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:21.612 13:29:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:21.612 13:29:18 -- common/autotest_common.sh@10 -- # set +x 00:18:21.612 13:29:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:21.612 [2024-07-26 13:29:18.975757] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:21.612 13:29:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:21.612 13:29:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:21.612 13:29:18 -- common/autotest_common.sh@10 -- # set +x 00:18:21.612 ************************************ 00:18:21.612 START TEST lvs_grow_clean 00:18:21.612 ************************************ 00:18:21.612 13:29:18 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:21.612 13:29:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:21.612 13:29:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:21.612 13:29:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:21.612 13:29:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:21.612 13:29:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:21.612 13:29:19 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:21.874 13:29:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:21.874 13:29:19 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c445e853-a084-484e-8bcf-06c4557114fc 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:22.135 13:29:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c445e853-a084-484e-8bcf-06c4557114fc lvol 150 00:18:22.397 13:29:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d60709a9-a8db-4131-89da-c909a49e094b 00:18:22.397 13:29:19 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:22.397 13:29:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:22.397 [2024-07-26 13:29:19.801234] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:22.397 [2024-07-26 13:29:19.801284] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:22.397 true 00:18:22.397 13:29:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:22.397 13:29:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:22.658 13:29:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:22.658 13:29:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:22.658 13:29:20 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d60709a9-a8db-4131-89da-c909a49e094b 00:18:22.920 13:29:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:23.181 [2024-07-26 13:29:20.403099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.181 13:29:20 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:23.181 13:29:20 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=945177 00:18:23.181 13:29:20 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.181 13:29:20 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:23.181 13:29:20 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 945177 /var/tmp/bdevperf.sock 00:18:23.181 13:29:20 -- common/autotest_common.sh@819 -- # '[' -z 945177 ']' 00:18:23.181 13:29:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.181 13:29:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.181 13:29:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.182 13:29:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.182 13:29:20 -- common/autotest_common.sh@10 -- # set +x 00:18:23.182 [2024-07-26 13:29:20.610155] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
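The lvs_grow setup above is entirely file-backed, which makes it easy to reproduce outside the harness. A minimal sketch, assuming a scratch path of your own choosing (the test keeps its file under the nvmf/target directory); the 49-cluster figure the log reports follows from 200 MiB of backing file at a 4 MiB cluster size, less lvstore metadata:

  aio_file=/tmp/aio_bdev        # hypothetical scratch path
  rpc=./scripts/rpc.py

  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)

  # 200 MiB / 4 MiB clusters, minus metadata -> the 49 data clusters seen above.
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

  $rpc bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB volume inside the store

  # Growing the file alone changes nothing until the aio bdev is rescanned and
  # the lvstore itself is told to grow (see the grow step further down).
  truncate -s 400M "$aio_file"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49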
00:18:23.182 [2024-07-26 13:29:20.610212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945177 ] 00:18:23.182 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.443 [2024-07-26 13:29:20.686019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.443 [2024-07-26 13:29:20.715912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.016 13:29:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:24.016 13:29:21 -- common/autotest_common.sh@852 -- # return 0 00:18:24.016 13:29:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:24.277 Nvme0n1 00:18:24.277 13:29:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:24.539 [ 00:18:24.539 { 00:18:24.539 "name": "Nvme0n1", 00:18:24.539 "aliases": [ 00:18:24.539 "d60709a9-a8db-4131-89da-c909a49e094b" 00:18:24.539 ], 00:18:24.539 "product_name": "NVMe disk", 00:18:24.539 "block_size": 4096, 00:18:24.539 "num_blocks": 38912, 00:18:24.539 "uuid": "d60709a9-a8db-4131-89da-c909a49e094b", 00:18:24.539 "assigned_rate_limits": { 00:18:24.539 "rw_ios_per_sec": 0, 00:18:24.539 "rw_mbytes_per_sec": 0, 00:18:24.539 "r_mbytes_per_sec": 0, 00:18:24.539 "w_mbytes_per_sec": 0 00:18:24.539 }, 00:18:24.539 "claimed": false, 00:18:24.539 "zoned": false, 00:18:24.539 "supported_io_types": { 00:18:24.539 "read": true, 00:18:24.539 "write": true, 00:18:24.539 "unmap": true, 00:18:24.539 "write_zeroes": true, 00:18:24.539 "flush": true, 00:18:24.539 "reset": true, 00:18:24.539 "compare": true, 00:18:24.539 "compare_and_write": true, 00:18:24.539 "abort": true, 00:18:24.539 "nvme_admin": true, 00:18:24.539 "nvme_io": true 00:18:24.539 }, 00:18:24.539 "driver_specific": { 00:18:24.539 "nvme": [ 00:18:24.539 { 00:18:24.539 "trid": { 00:18:24.539 "trtype": "TCP", 00:18:24.539 "adrfam": "IPv4", 00:18:24.539 "traddr": "10.0.0.2", 00:18:24.539 "trsvcid": "4420", 00:18:24.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:24.539 }, 00:18:24.539 "ctrlr_data": { 00:18:24.539 "cntlid": 1, 00:18:24.539 "vendor_id": "0x8086", 00:18:24.539 "model_number": "SPDK bdev Controller", 00:18:24.539 "serial_number": "SPDK0", 00:18:24.539 "firmware_revision": "24.01.1", 00:18:24.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:24.539 "oacs": { 00:18:24.539 "security": 0, 00:18:24.539 "format": 0, 00:18:24.539 "firmware": 0, 00:18:24.539 "ns_manage": 0 00:18:24.539 }, 00:18:24.539 "multi_ctrlr": true, 00:18:24.539 "ana_reporting": false 00:18:24.539 }, 00:18:24.539 "vs": { 00:18:24.539 "nvme_version": "1.3" 00:18:24.539 }, 00:18:24.539 "ns_data": { 00:18:24.539 "id": 1, 00:18:24.539 "can_share": true 00:18:24.539 } 00:18:24.539 } 00:18:24.539 ], 00:18:24.539 "mp_policy": "active_passive" 00:18:24.539 } 00:18:24.539 } 00:18:24.539 ] 00:18:24.539 13:29:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=945513 00:18:24.539 13:29:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:24.539 13:29:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:24.539 Running I/O 
for 10 seconds... 00:18:25.502 Latency(us) 00:18:25.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.502 Nvme0n1 : 1.00 17948.00 70.11 0.00 0.00 0.00 0.00 0.00 00:18:25.502 =================================================================================================================== 00:18:25.502 Total : 17948.00 70.11 0.00 0.00 0.00 0.00 0.00 00:18:25.502 00:18:26.445 13:29:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:26.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.706 Nvme0n1 : 2.00 18150.00 70.90 0.00 0.00 0.00 0.00 0.00 00:18:26.706 =================================================================================================================== 00:18:26.706 Total : 18150.00 70.90 0.00 0.00 0.00 0.00 0.00 00:18:26.706 00:18:26.706 true 00:18:26.706 13:29:24 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:26.706 13:29:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:27.016 13:29:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:27.016 13:29:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:27.016 13:29:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 945513 00:18:27.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.598 Nvme0n1 : 3.00 18220.00 71.17 0.00 0.00 0.00 0.00 0.00 00:18:27.598 =================================================================================================================== 00:18:27.598 Total : 18220.00 71.17 0.00 0.00 0.00 0.00 0.00 00:18:27.598 00:18:28.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.539 Nvme0n1 : 4.00 18267.00 71.36 0.00 0.00 0.00 0.00 0.00 00:18:28.539 =================================================================================================================== 00:18:28.539 Total : 18267.00 71.36 0.00 0.00 0.00 0.00 0.00 00:18:28.539 00:18:29.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.530 Nvme0n1 : 5.00 18298.40 71.48 0.00 0.00 0.00 0.00 0.00 00:18:29.530 =================================================================================================================== 00:18:29.530 Total : 18298.40 71.48 0.00 0.00 0.00 0.00 0.00 00:18:29.530 00:18:30.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.918 Nvme0n1 : 6.00 18327.33 71.59 0.00 0.00 0.00 0.00 0.00 00:18:30.918 =================================================================================================================== 00:18:30.918 Total : 18327.33 71.59 0.00 0.00 0.00 0.00 0.00 00:18:30.918 00:18:31.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.860 Nvme0n1 : 7.00 18348.00 71.67 0.00 0.00 0.00 0.00 0.00 00:18:31.860 =================================================================================================================== 00:18:31.860 Total : 18348.00 71.67 0.00 0.00 0.00 0.00 0.00 00:18:31.860 00:18:32.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.804 Nvme0n1 : 8.00 18357.50 71.71 0.00 0.00 0.00 0.00 0.00 00:18:32.804 
=================================================================================================================== 00:18:32.804 Total : 18357.50 71.71 0.00 0.00 0.00 0.00 0.00 00:18:32.804 00:18:33.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.747 Nvme0n1 : 9.00 18366.67 71.74 0.00 0.00 0.00 0.00 0.00 00:18:33.747 =================================================================================================================== 00:18:33.747 Total : 18366.67 71.74 0.00 0.00 0.00 0.00 0.00 00:18:33.747 00:18:34.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.690 Nvme0n1 : 10.00 18373.20 71.77 0.00 0.00 0.00 0.00 0.00 00:18:34.690 =================================================================================================================== 00:18:34.690 Total : 18373.20 71.77 0.00 0.00 0.00 0.00 0.00 00:18:34.690 00:18:34.690 00:18:34.690 Latency(us) 00:18:34.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.690 Nvme0n1 : 10.01 18373.28 71.77 0.00 0.00 6962.43 4997.12 22719.15 00:18:34.690 =================================================================================================================== 00:18:34.690 Total : 18373.28 71.77 0.00 0.00 6962.43 4997.12 22719.15 00:18:34.690 0 00:18:34.690 13:29:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 945177 00:18:34.690 13:29:32 -- common/autotest_common.sh@926 -- # '[' -z 945177 ']' 00:18:34.690 13:29:32 -- common/autotest_common.sh@930 -- # kill -0 945177 00:18:34.690 13:29:32 -- common/autotest_common.sh@931 -- # uname 00:18:34.690 13:29:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:34.690 13:29:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 945177 00:18:34.690 13:29:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:34.690 13:29:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:34.690 13:29:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 945177' 00:18:34.690 killing process with pid 945177 00:18:34.690 13:29:32 -- common/autotest_common.sh@945 -- # kill 945177 00:18:34.690 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.690 00:18:34.690 Latency(us) 00:18:34.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.690 =================================================================================================================== 00:18:34.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.690 13:29:32 -- common/autotest_common.sh@950 -- # wait 945177 00:18:34.952 13:29:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:34.952 13:29:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:34.952 13:29:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:35.213 13:29:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:35.213 13:29:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:35.213 13:29:32 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:35.213 [2024-07-26 13:29:32.663727] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: 
*NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:35.474 13:29:32 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:35.474 13:29:32 -- common/autotest_common.sh@640 -- # local es=0 00:18:35.474 13:29:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:35.474 13:29:32 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.474 13:29:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:35.474 13:29:32 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.474 13:29:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:35.474 13:29:32 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.474 13:29:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:35.474 13:29:32 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.474 13:29:32 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:35.474 13:29:32 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:35.474 request: 00:18:35.474 { 00:18:35.474 "uuid": "c445e853-a084-484e-8bcf-06c4557114fc", 00:18:35.474 "method": "bdev_lvol_get_lvstores", 00:18:35.474 "req_id": 1 00:18:35.474 } 00:18:35.474 Got JSON-RPC error response 00:18:35.475 response: 00:18:35.475 { 00:18:35.475 "code": -19, 00:18:35.475 "message": "No such device" 00:18:35.475 } 00:18:35.475 13:29:32 -- common/autotest_common.sh@643 -- # es=1 00:18:35.475 13:29:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:35.475 13:29:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:35.475 13:29:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:35.475 13:29:32 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:35.736 aio_bdev 00:18:35.736 13:29:33 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d60709a9-a8db-4131-89da-c909a49e094b 00:18:35.736 13:29:33 -- common/autotest_common.sh@887 -- # local bdev_name=d60709a9-a8db-4131-89da-c909a49e094b 00:18:35.736 13:29:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:35.736 13:29:33 -- common/autotest_common.sh@889 -- # local i 00:18:35.736 13:29:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:35.736 13:29:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:35.736 13:29:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:35.736 13:29:33 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d60709a9-a8db-4131-89da-c909a49e094b -t 2000 00:18:35.998 [ 00:18:35.998 { 00:18:35.998 "name": "d60709a9-a8db-4131-89da-c909a49e094b", 00:18:35.998 "aliases": [ 00:18:35.998 "lvs/lvol" 00:18:35.998 ], 00:18:35.998 
"product_name": "Logical Volume", 00:18:35.998 "block_size": 4096, 00:18:35.998 "num_blocks": 38912, 00:18:35.998 "uuid": "d60709a9-a8db-4131-89da-c909a49e094b", 00:18:35.998 "assigned_rate_limits": { 00:18:35.998 "rw_ios_per_sec": 0, 00:18:35.998 "rw_mbytes_per_sec": 0, 00:18:35.998 "r_mbytes_per_sec": 0, 00:18:35.998 "w_mbytes_per_sec": 0 00:18:35.998 }, 00:18:35.998 "claimed": false, 00:18:35.998 "zoned": false, 00:18:35.998 "supported_io_types": { 00:18:35.998 "read": true, 00:18:35.998 "write": true, 00:18:35.998 "unmap": true, 00:18:35.998 "write_zeroes": true, 00:18:35.998 "flush": false, 00:18:35.998 "reset": true, 00:18:35.998 "compare": false, 00:18:35.998 "compare_and_write": false, 00:18:35.998 "abort": false, 00:18:35.998 "nvme_admin": false, 00:18:35.998 "nvme_io": false 00:18:35.998 }, 00:18:35.998 "driver_specific": { 00:18:35.998 "lvol": { 00:18:35.998 "lvol_store_uuid": "c445e853-a084-484e-8bcf-06c4557114fc", 00:18:35.998 "base_bdev": "aio_bdev", 00:18:35.998 "thin_provision": false, 00:18:35.998 "snapshot": false, 00:18:35.998 "clone": false, 00:18:35.998 "esnap_clone": false 00:18:35.998 } 00:18:35.998 } 00:18:35.998 } 00:18:35.998 ] 00:18:35.998 13:29:33 -- common/autotest_common.sh@895 -- # return 0 00:18:35.998 13:29:33 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:35.998 13:29:33 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:35.998 13:29:33 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:35.998 13:29:33 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:35.998 13:29:33 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:36.259 13:29:33 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:36.259 13:29:33 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d60709a9-a8db-4131-89da-c909a49e094b 00:18:36.259 13:29:33 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c445e853-a084-484e-8bcf-06c4557114fc 00:18:36.520 13:29:33 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:36.782 00:18:36.782 real 0m15.067s 00:18:36.782 user 0m14.738s 00:18:36.782 sys 0m1.326s 00:18:36.782 13:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.782 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:18:36.782 ************************************ 00:18:36.782 END TEST lvs_grow_clean 00:18:36.782 ************************************ 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:36.782 13:29:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:36.782 13:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:36.782 13:29:34 -- common/autotest_common.sh@10 -- # set +x 00:18:36.782 ************************************ 00:18:36.782 START TEST lvs_grow_dirty 00:18:36.782 ************************************ 00:18:36.782 13:29:34 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:36.782 13:29:34 -- 
target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:36.782 13:29:34 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:37.043 13:29:34 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:37.043 13:29:34 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:37.043 13:29:34 -- target/nvmf_lvs_grow.sh@28 -- # lvs=edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:37.043 13:29:34 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:37.043 13:29:34 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u edd42c08-49b9-46ae-a2bc-a696f18c5775 lvol 150 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:37.304 13:29:34 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:37.566 [2024-07-26 13:29:34.892237] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:37.566 [2024-07-26 13:29:34.892289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:37.566 true 00:18:37.566 13:29:34 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:37.566 13:29:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:37.827 13:29:35 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:37.827 13:29:35 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:37.827 13:29:35 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4af6de93-7677-427b-a96b-3a36e1fd5014 
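Before the dirty variant repeats it below, note that the grow already exercised in the clean variant reduces to one RPC plus a recheck. A sketch, reusing the $rpc and $lvs shorthands from the earlier snippets (the same calls apply to either store UUID):

  # With the backing file truncated to 400M and the aio bdev rescanned,
  # tell the lvstore to claim the new space and confirm the cluster count
  # roughly doubled (49 -> 99 data clusters in this run).
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 )) && echo "lvstore grew as expected"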
00:18:38.088 13:29:35 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:38.089 13:29:35 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:38.350 13:29:35 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:38.350 13:29:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=948277 00:18:38.350 13:29:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:38.350 13:29:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 948277 /var/tmp/bdevperf.sock 00:18:38.350 13:29:35 -- common/autotest_common.sh@819 -- # '[' -z 948277 ']' 00:18:38.350 13:29:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.350 13:29:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:38.350 13:29:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.350 13:29:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:38.350 13:29:35 -- common/autotest_common.sh@10 -- # set +x 00:18:38.350 [2024-07-26 13:29:35.675674] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:38.350 [2024-07-26 13:29:35.675725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948277 ] 00:18:38.350 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.350 [2024-07-26 13:29:35.748521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.350 [2024-07-26 13:29:35.775179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.294 13:29:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:39.294 13:29:36 -- common/autotest_common.sh@852 -- # return 0 00:18:39.294 13:29:36 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:39.555 Nvme0n1 00:18:39.555 13:29:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:39.555 [ 00:18:39.555 { 00:18:39.555 "name": "Nvme0n1", 00:18:39.555 "aliases": [ 00:18:39.555 "4af6de93-7677-427b-a96b-3a36e1fd5014" 00:18:39.555 ], 00:18:39.555 "product_name": "NVMe disk", 00:18:39.555 "block_size": 4096, 00:18:39.555 "num_blocks": 38912, 00:18:39.555 "uuid": "4af6de93-7677-427b-a96b-3a36e1fd5014", 00:18:39.555 "assigned_rate_limits": { 00:18:39.555 "rw_ios_per_sec": 0, 00:18:39.555 "rw_mbytes_per_sec": 0, 00:18:39.555 "r_mbytes_per_sec": 0, 00:18:39.555 "w_mbytes_per_sec": 0 00:18:39.555 }, 00:18:39.555 "claimed": false, 00:18:39.555 "zoned": false, 00:18:39.555 "supported_io_types": { 00:18:39.555 "read": true, 00:18:39.555 "write": true, 00:18:39.555 "unmap": true, 00:18:39.555 
"write_zeroes": true, 00:18:39.555 "flush": true, 00:18:39.555 "reset": true, 00:18:39.555 "compare": true, 00:18:39.555 "compare_and_write": true, 00:18:39.555 "abort": true, 00:18:39.555 "nvme_admin": true, 00:18:39.555 "nvme_io": true 00:18:39.555 }, 00:18:39.555 "driver_specific": { 00:18:39.555 "nvme": [ 00:18:39.555 { 00:18:39.555 "trid": { 00:18:39.555 "trtype": "TCP", 00:18:39.555 "adrfam": "IPv4", 00:18:39.555 "traddr": "10.0.0.2", 00:18:39.555 "trsvcid": "4420", 00:18:39.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:39.555 }, 00:18:39.555 "ctrlr_data": { 00:18:39.555 "cntlid": 1, 00:18:39.555 "vendor_id": "0x8086", 00:18:39.555 "model_number": "SPDK bdev Controller", 00:18:39.555 "serial_number": "SPDK0", 00:18:39.555 "firmware_revision": "24.01.1", 00:18:39.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.555 "oacs": { 00:18:39.555 "security": 0, 00:18:39.555 "format": 0, 00:18:39.555 "firmware": 0, 00:18:39.555 "ns_manage": 0 00:18:39.555 }, 00:18:39.555 "multi_ctrlr": true, 00:18:39.555 "ana_reporting": false 00:18:39.555 }, 00:18:39.555 "vs": { 00:18:39.555 "nvme_version": "1.3" 00:18:39.555 }, 00:18:39.555 "ns_data": { 00:18:39.555 "id": 1, 00:18:39.555 "can_share": true 00:18:39.555 } 00:18:39.555 } 00:18:39.555 ], 00:18:39.555 "mp_policy": "active_passive" 00:18:39.555 } 00:18:39.555 } 00:18:39.555 ] 00:18:39.555 13:29:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=948620 00:18:39.555 13:29:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:39.555 13:29:36 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.816 Running I/O for 10 seconds... 00:18:40.760 Latency(us) 00:18:40.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.761 Nvme0n1 : 1.00 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:18:40.761 =================================================================================================================== 00:18:40.761 Total : 17865.00 69.79 0.00 0.00 0.00 0.00 0.00 00:18:40.761 00:18:41.704 13:29:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:41.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.704 Nvme0n1 : 2.00 18056.50 70.53 0.00 0.00 0.00 0.00 0.00 00:18:41.704 =================================================================================================================== 00:18:41.704 Total : 18056.50 70.53 0.00 0.00 0.00 0.00 0.00 00:18:41.704 00:18:41.704 true 00:18:41.704 13:29:39 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:41.704 13:29:39 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:41.965 13:29:39 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:41.965 13:29:39 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:41.965 13:29:39 -- target/nvmf_lvs_grow.sh@65 -- # wait 948620 00:18:42.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.910 Nvme0n1 : 3.00 18128.33 70.81 0.00 0.00 0.00 0.00 0.00 00:18:42.910 =================================================================================================================== 00:18:42.910 Total 
: 18128.33 70.81 0.00 0.00 0.00 0.00 0.00 00:18:42.910 00:18:43.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.854 Nvme0n1 : 4.00 18178.25 71.01 0.00 0.00 0.00 0.00 0.00 00:18:43.854 =================================================================================================================== 00:18:43.854 Total : 18178.25 71.01 0.00 0.00 0.00 0.00 0.00 00:18:43.854 00:18:44.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.797 Nvme0n1 : 5.00 18213.00 71.14 0.00 0.00 0.00 0.00 0.00 00:18:44.797 =================================================================================================================== 00:18:44.797 Total : 18213.00 71.14 0.00 0.00 0.00 0.00 0.00 00:18:44.797 00:18:45.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.741 Nvme0n1 : 6.00 18242.83 71.26 0.00 0.00 0.00 0.00 0.00 00:18:45.741 =================================================================================================================== 00:18:45.741 Total : 18242.83 71.26 0.00 0.00 0.00 0.00 0.00 00:18:45.741 00:18:46.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.716 Nvme0n1 : 7.00 18263.00 71.34 0.00 0.00 0.00 0.00 0.00 00:18:46.716 =================================================================================================================== 00:18:46.716 Total : 18263.00 71.34 0.00 0.00 0.00 0.00 0.00 00:18:46.716 00:18:47.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.661 Nvme0n1 : 8.00 18282.12 71.41 0.00 0.00 0.00 0.00 0.00 00:18:47.661 =================================================================================================================== 00:18:47.661 Total : 18282.12 71.41 0.00 0.00 0.00 0.00 0.00 00:18:47.661 00:18:48.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.606 Nvme0n1 : 9.00 18297.00 71.47 0.00 0.00 0.00 0.00 0.00 00:18:48.606 =================================================================================================================== 00:18:48.606 Total : 18297.00 71.47 0.00 0.00 0.00 0.00 0.00 00:18:48.606 00:18:49.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.993 Nvme0n1 : 10.00 18309.70 71.52 0.00 0.00 0.00 0.00 0.00 00:18:49.993 =================================================================================================================== 00:18:49.993 Total : 18309.70 71.52 0.00 0.00 0.00 0.00 0.00 00:18:49.993 00:18:49.993 00:18:49.993 Latency(us) 00:18:49.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.993 Nvme0n1 : 10.01 18309.84 71.52 0.00 0.00 6986.97 5079.04 23811.41 00:18:49.993 =================================================================================================================== 00:18:49.993 Total : 18309.84 71.52 0.00 0.00 6986.97 5079.04 23811.41 00:18:49.993 0 00:18:49.993 13:29:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 948277 00:18:49.993 13:29:47 -- common/autotest_common.sh@926 -- # '[' -z 948277 ']' 00:18:49.993 13:29:47 -- common/autotest_common.sh@930 -- # kill -0 948277 00:18:49.993 13:29:47 -- common/autotest_common.sh@931 -- # uname 00:18:49.993 13:29:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:49.993 13:29:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
948277 00:18:49.993 13:29:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:49.993 13:29:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:49.993 13:29:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 948277' 00:18:49.993 killing process with pid 948277 00:18:49.993 13:29:47 -- common/autotest_common.sh@945 -- # kill 948277 00:18:49.993 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.993 00:18:49.993 Latency(us) 00:18:49.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.993 =================================================================================================================== 00:18:49.993 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.993 13:29:47 -- common/autotest_common.sh@950 -- # wait 948277 00:18:49.993 13:29:47 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:49.993 13:29:47 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:49.993 13:29:47 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 944778 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@74 -- # wait 944778 00:18:50.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 944778 Killed "${NVMF_APP[@]}" "$@" 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:50.255 13:29:47 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:50.255 13:29:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:50.255 13:29:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:50.255 13:29:47 -- common/autotest_common.sh@10 -- # set +x 00:18:50.255 13:29:47 -- nvmf/common.sh@469 -- # nvmfpid=950664 00:18:50.255 13:29:47 -- nvmf/common.sh@470 -- # waitforlisten 950664 00:18:50.255 13:29:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:50.255 13:29:47 -- common/autotest_common.sh@819 -- # '[' -z 950664 ']' 00:18:50.255 13:29:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.255 13:29:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:50.255 13:29:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.255 13:29:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:50.255 13:29:47 -- common/autotest_common.sh@10 -- # set +x 00:18:50.255 [2024-07-26 13:29:47.663290] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
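After bdev_lvol_grow_lvstore, the cluster accounting is read back with the same jq filters used throughout this suite, and the dirty variant then SIGKILLs the target (kill -9 944778) so the lvstore is left unclean and blobstore recovery gets exercised on the next load. A sketch of the accounting check, with the rpc.py path and lvstore UUID as placeholders:

# Read back cluster accounting the way the test's jq filters do (UUID is a placeholder).
RPC=./scripts/rpc.py
FREE=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
TOTAL=$($RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
# After growing the 200M store to 400M this run expects 99 total clusters, 61 of them free
# (the 150 MiB lvol occupies 38 of the 4 MiB clusters).
(( TOTAL == 99 && FREE == 61 )) || echo "unexpected cluster counts: free=$FREE total=$TOTAL"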
00:18:50.255 [2024-07-26 13:29:47.663344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.255 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.516 [2024-07-26 13:29:47.728585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.516 [2024-07-26 13:29:47.758576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:50.516 [2024-07-26 13:29:47.758695] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.516 [2024-07-26 13:29:47.758704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.516 [2024-07-26 13:29:47.758711] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.516 [2024-07-26 13:29:47.758735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.088 13:29:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:51.088 13:29:48 -- common/autotest_common.sh@852 -- # return 0 00:18:51.088 13:29:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:51.088 13:29:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:51.088 13:29:48 -- common/autotest_common.sh@10 -- # set +x 00:18:51.088 13:29:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.088 13:29:48 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:51.349 [2024-07-26 13:29:48.597960] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:51.349 [2024-07-26 13:29:48.598051] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:51.349 [2024-07-26 13:29:48.598080] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:51.349 13:29:48 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:51.349 13:29:48 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:51.349 13:29:48 -- common/autotest_common.sh@887 -- # local bdev_name=4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:51.349 13:29:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:51.349 13:29:48 -- common/autotest_common.sh@889 -- # local i 00:18:51.349 13:29:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:51.349 13:29:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:51.349 13:29:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:51.349 13:29:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4af6de93-7677-427b-a96b-3a36e1fd5014 -t 2000 00:18:51.610 [ 00:18:51.610 { 00:18:51.610 "name": "4af6de93-7677-427b-a96b-3a36e1fd5014", 00:18:51.610 "aliases": [ 00:18:51.610 "lvs/lvol" 00:18:51.610 ], 00:18:51.610 "product_name": "Logical Volume", 00:18:51.610 "block_size": 4096, 00:18:51.610 "num_blocks": 38912, 00:18:51.610 "uuid": "4af6de93-7677-427b-a96b-3a36e1fd5014", 00:18:51.610 "assigned_rate_limits": { 00:18:51.610 "rw_ios_per_sec": 0, 00:18:51.610 "rw_mbytes_per_sec": 0, 00:18:51.610 "r_mbytes_per_sec": 0, 00:18:51.610 
"w_mbytes_per_sec": 0 00:18:51.610 }, 00:18:51.610 "claimed": false, 00:18:51.610 "zoned": false, 00:18:51.610 "supported_io_types": { 00:18:51.610 "read": true, 00:18:51.610 "write": true, 00:18:51.610 "unmap": true, 00:18:51.610 "write_zeroes": true, 00:18:51.610 "flush": false, 00:18:51.610 "reset": true, 00:18:51.610 "compare": false, 00:18:51.610 "compare_and_write": false, 00:18:51.610 "abort": false, 00:18:51.610 "nvme_admin": false, 00:18:51.610 "nvme_io": false 00:18:51.610 }, 00:18:51.610 "driver_specific": { 00:18:51.610 "lvol": { 00:18:51.610 "lvol_store_uuid": "edd42c08-49b9-46ae-a2bc-a696f18c5775", 00:18:51.610 "base_bdev": "aio_bdev", 00:18:51.610 "thin_provision": false, 00:18:51.610 "snapshot": false, 00:18:51.610 "clone": false, 00:18:51.610 "esnap_clone": false 00:18:51.610 } 00:18:51.610 } 00:18:51.610 } 00:18:51.610 ] 00:18:51.610 13:29:48 -- common/autotest_common.sh@895 -- # return 0 00:18:51.610 13:29:48 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:51.610 13:29:48 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:51.610 13:29:49 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:51.610 13:29:49 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:51.610 13:29:49 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:51.871 13:29:49 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:51.871 13:29:49 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:51.871 [2024-07-26 13:29:49.329895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:52.132 13:29:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:52.132 13:29:49 -- common/autotest_common.sh@640 -- # local es=0 00:18:52.132 13:29:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:52.132 13:29:49 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.132 13:29:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:52.132 13:29:49 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.132 13:29:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:52.132 13:29:49 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.132 13:29:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:52.132 13:29:49 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:52.132 13:29:49 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:52.132 13:29:49 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:52.132 request: 00:18:52.132 { 00:18:52.132 
"uuid": "edd42c08-49b9-46ae-a2bc-a696f18c5775", 00:18:52.132 "method": "bdev_lvol_get_lvstores", 00:18:52.132 "req_id": 1 00:18:52.132 } 00:18:52.132 Got JSON-RPC error response 00:18:52.132 response: 00:18:52.132 { 00:18:52.132 "code": -19, 00:18:52.132 "message": "No such device" 00:18:52.132 } 00:18:52.132 13:29:49 -- common/autotest_common.sh@643 -- # es=1 00:18:52.132 13:29:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:52.132 13:29:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:52.132 13:29:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:52.132 13:29:49 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:52.393 aio_bdev 00:18:52.393 13:29:49 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:52.393 13:29:49 -- common/autotest_common.sh@887 -- # local bdev_name=4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:52.393 13:29:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:52.393 13:29:49 -- common/autotest_common.sh@889 -- # local i 00:18:52.394 13:29:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:52.394 13:29:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:52.394 13:29:49 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:52.394 13:29:49 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4af6de93-7677-427b-a96b-3a36e1fd5014 -t 2000 00:18:52.655 [ 00:18:52.655 { 00:18:52.655 "name": "4af6de93-7677-427b-a96b-3a36e1fd5014", 00:18:52.655 "aliases": [ 00:18:52.655 "lvs/lvol" 00:18:52.655 ], 00:18:52.655 "product_name": "Logical Volume", 00:18:52.655 "block_size": 4096, 00:18:52.655 "num_blocks": 38912, 00:18:52.655 "uuid": "4af6de93-7677-427b-a96b-3a36e1fd5014", 00:18:52.655 "assigned_rate_limits": { 00:18:52.655 "rw_ios_per_sec": 0, 00:18:52.655 "rw_mbytes_per_sec": 0, 00:18:52.655 "r_mbytes_per_sec": 0, 00:18:52.655 "w_mbytes_per_sec": 0 00:18:52.655 }, 00:18:52.655 "claimed": false, 00:18:52.655 "zoned": false, 00:18:52.655 "supported_io_types": { 00:18:52.655 "read": true, 00:18:52.655 "write": true, 00:18:52.655 "unmap": true, 00:18:52.655 "write_zeroes": true, 00:18:52.655 "flush": false, 00:18:52.655 "reset": true, 00:18:52.655 "compare": false, 00:18:52.655 "compare_and_write": false, 00:18:52.655 "abort": false, 00:18:52.655 "nvme_admin": false, 00:18:52.655 "nvme_io": false 00:18:52.655 }, 00:18:52.655 "driver_specific": { 00:18:52.655 "lvol": { 00:18:52.655 "lvol_store_uuid": "edd42c08-49b9-46ae-a2bc-a696f18c5775", 00:18:52.655 "base_bdev": "aio_bdev", 00:18:52.655 "thin_provision": false, 00:18:52.655 "snapshot": false, 00:18:52.655 "clone": false, 00:18:52.655 "esnap_clone": false 00:18:52.655 } 00:18:52.655 } 00:18:52.655 } 00:18:52.655 ] 00:18:52.655 13:29:49 -- common/autotest_common.sh@895 -- # return 0 00:18:52.655 13:29:49 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:52.655 13:29:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:52.655 13:29:50 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:52.917 13:29:50 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:52.917 13:29:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:52.917 13:29:50 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:52.917 13:29:50 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4af6de93-7677-427b-a96b-3a36e1fd5014 00:18:53.178 13:29:50 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u edd42c08-49b9-46ae-a2bc-a696f18c5775 00:18:53.178 13:29:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:53.439 13:29:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:53.439 00:18:53.439 real 0m16.637s 00:18:53.439 user 0m43.420s 00:18:53.439 sys 0m3.048s 00:18:53.439 13:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.439 13:29:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.439 ************************************ 00:18:53.439 END TEST lvs_grow_dirty 00:18:53.439 ************************************ 00:18:53.439 13:29:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:53.439 13:29:50 -- common/autotest_common.sh@796 -- # type=--id 00:18:53.439 13:29:50 -- common/autotest_common.sh@797 -- # id=0 00:18:53.439 13:29:50 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:53.439 13:29:50 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.439 13:29:50 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:53.439 13:29:50 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:53.439 13:29:50 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:53.439 13:29:50 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.439 nvmf_trace.0 00:18:53.439 13:29:50 -- common/autotest_common.sh@811 -- # return 0 00:18:53.439 13:29:50 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:53.439 13:29:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:53.439 13:29:50 -- nvmf/common.sh@116 -- # sync 00:18:53.439 13:29:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:53.439 13:29:50 -- nvmf/common.sh@119 -- # set +e 00:18:53.439 13:29:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:53.439 13:29:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:53.439 rmmod nvme_tcp 00:18:53.439 rmmod nvme_fabrics 00:18:53.439 rmmod nvme_keyring 00:18:53.439 13:29:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:53.439 13:29:50 -- nvmf/common.sh@123 -- # set -e 00:18:53.439 13:29:50 -- nvmf/common.sh@124 -- # return 0 00:18:53.439 13:29:50 -- nvmf/common.sh@477 -- # '[' -n 950664 ']' 00:18:53.439 13:29:50 -- nvmf/common.sh@478 -- # killprocess 950664 00:18:53.439 13:29:50 -- common/autotest_common.sh@926 -- # '[' -z 950664 ']' 00:18:53.439 13:29:50 -- common/autotest_common.sh@930 -- # kill -0 950664 00:18:53.439 13:29:50 -- common/autotest_common.sh@931 -- # uname 00:18:53.700 13:29:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:53.700 13:29:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 950664 00:18:53.700 13:29:50 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:18:53.700 13:29:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:53.700 13:29:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 950664' 00:18:53.700 killing process with pid 950664 00:18:53.700 13:29:50 -- common/autotest_common.sh@945 -- # kill 950664 00:18:53.700 13:29:50 -- common/autotest_common.sh@950 -- # wait 950664 00:18:53.700 13:29:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:53.700 13:29:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:53.701 13:29:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:53.701 13:29:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.701 13:29:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:53.701 13:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.701 13:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.701 13:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.248 13:29:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:56.248 00:18:56.248 real 0m42.490s 00:18:56.248 user 1m3.974s 00:18:56.248 sys 0m10.019s 00:18:56.248 13:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.248 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:56.248 ************************************ 00:18:56.248 END TEST nvmf_lvs_grow 00:18:56.248 ************************************ 00:18:56.248 13:29:53 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:56.248 13:29:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:56.248 13:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:56.248 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:18:56.248 ************************************ 00:18:56.248 START TEST nvmf_bdev_io_wait 00:18:56.248 ************************************ 00:18:56.248 13:29:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:56.248 * Looking for test storage... 
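Between suites the harness tears everything down: the trace shared-memory file is archived, the nvme transport modules are unloaded, and the namespaced interface is released. Roughly, with the output path as a placeholder and the namespace cleanup assumed from what remove_spdk_ns does rather than printed verbatim in the log:

# Per-suite teardown, approximately as performed above (device names are from this run).
tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace buffer
kill "$NVMF_PID" 2>/dev/null                       # the harness's killprocess <pid>, which also waits
modprobe -v -r nvme-tcp                            # unloads nvme_tcp (and nvme_fabrics/nvme_keyring deps)
modprobe -v -r nvme-fabrics
ip netns del cvl_0_0_ns_spdk 2>/dev/null           # assumed detail of remove_spdk_ns
ip -4 addr flush cvl_0_1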
00:18:56.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.248 13:29:53 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.248 13:29:53 -- nvmf/common.sh@7 -- # uname -s 00:18:56.248 13:29:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.248 13:29:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.248 13:29:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.248 13:29:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.248 13:29:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.248 13:29:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.248 13:29:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.248 13:29:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.248 13:29:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.248 13:29:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.248 13:29:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.248 13:29:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.248 13:29:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.248 13:29:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.248 13:29:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.248 13:29:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.248 13:29:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.248 13:29:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.248 13:29:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.248 13:29:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.248 13:29:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.248 13:29:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.248 13:29:53 -- paths/export.sh@5 -- # export PATH 00:18:56.248 13:29:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.248 13:29:53 -- nvmf/common.sh@46 -- # : 0 00:18:56.248 13:29:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:56.248 13:29:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:56.248 13:29:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:56.248 13:29:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.248 13:29:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.248 13:29:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:56.248 13:29:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:56.248 13:29:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:56.248 13:29:53 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.248 13:29:53 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.248 13:29:53 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:56.248 13:29:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:56.248 13:29:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.248 13:29:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:56.248 13:29:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:56.248 13:29:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:56.248 13:29:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.248 13:29:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.248 13:29:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.248 13:29:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:56.248 13:29:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:56.248 13:29:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:56.248 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:02.838 13:30:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:02.838 13:30:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:02.838 13:30:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:02.838 13:30:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:02.838 13:30:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:02.838 13:30:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:02.838 13:30:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:02.838 13:30:00 -- nvmf/common.sh@294 -- # net_devs=() 00:19:02.838 13:30:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:02.838 13:30:00 -- 
nvmf/common.sh@295 -- # e810=() 00:19:02.838 13:30:00 -- nvmf/common.sh@295 -- # local -ga e810 00:19:02.838 13:30:00 -- nvmf/common.sh@296 -- # x722=() 00:19:02.838 13:30:00 -- nvmf/common.sh@296 -- # local -ga x722 00:19:02.838 13:30:00 -- nvmf/common.sh@297 -- # mlx=() 00:19:02.838 13:30:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:02.838 13:30:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.838 13:30:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:02.838 13:30:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:02.838 13:30:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.838 13:30:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:02.838 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:02.838 13:30:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:02.838 13:30:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:02.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:02.838 13:30:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.838 13:30:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.838 13:30:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.838 13:30:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:19:02.838 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:02.838 13:30:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.838 13:30:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:02.838 13:30:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.838 13:30:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.838 13:30:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:02.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:02.838 13:30:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.838 13:30:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:02.838 13:30:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:02.838 13:30:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:02.838 13:30:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.838 13:30:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.838 13:30:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.838 13:30:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:02.838 13:30:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.838 13:30:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.838 13:30:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:02.838 13:30:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.838 13:30:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.838 13:30:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:02.839 13:30:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:02.839 13:30:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.839 13:30:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.839 13:30:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.839 13:30:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.839 13:30:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:02.839 13:30:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.115 13:30:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.115 13:30:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.115 13:30:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:03.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:19:03.115 00:19:03.115 --- 10.0.0.2 ping statistics --- 00:19:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.115 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:19:03.115 13:30:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:03.115 00:19:03.115 --- 10.0.0.1 ping statistics --- 00:19:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.115 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:03.115 13:30:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.115 13:30:00 -- nvmf/common.sh@410 -- # return 0 00:19:03.115 13:30:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:03.115 13:30:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.115 13:30:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:03.115 13:30:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:03.115 13:30:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.115 13:30:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:03.115 13:30:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:03.115 13:30:00 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:03.115 13:30:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:03.115 13:30:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:03.115 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.115 13:30:00 -- nvmf/common.sh@469 -- # nvmfpid=955458 00:19:03.115 13:30:00 -- nvmf/common.sh@470 -- # waitforlisten 955458 00:19:03.115 13:30:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:03.115 13:30:00 -- common/autotest_common.sh@819 -- # '[' -z 955458 ']' 00:19:03.115 13:30:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.115 13:30:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:03.115 13:30:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.115 13:30:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:03.115 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.115 [2024-07-26 13:30:00.521473] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:03.115 [2024-07-26 13:30:00.521524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.115 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.376 [2024-07-26 13:30:00.589730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.376 [2024-07-26 13:30:00.620304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:03.376 [2024-07-26 13:30:00.620443] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.376 [2024-07-26 13:30:00.620456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.376 [2024-07-26 13:30:00.620465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
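The phy NIC pair detected above (two ice/E810 ports, cvl_0_0 and cvl_0_1) is split into a target namespace and an initiator side before any NVMe/TCP traffic flows; the commands below are the ones echoed in the log, collected here for readability.

# Target/initiator split used by the TCP tests (interface names as detected on this host).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
ping -c 1 10.0.0.2                                             # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root ns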
00:19:03.376 [2024-07-26 13:30:00.620601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.376 [2024-07-26 13:30:00.620615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.376 [2024-07-26 13:30:00.620748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.376 [2024-07-26 13:30:00.620749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.948 13:30:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:03.948 13:30:01 -- common/autotest_common.sh@852 -- # return 0 00:19:03.948 13:30:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:03.948 13:30:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:03.948 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:03.949 13:30:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.949 13:30:01 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:03.949 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.949 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:03.949 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.949 13:30:01 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:03.949 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.949 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:03.949 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.949 13:30:01 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:03.949 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.949 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:03.949 [2024-07-26 13:30:01.384869] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.949 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:03.949 13:30:01 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:03.949 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:03.949 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 Malloc0 00:19:04.210 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:04.210 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.210 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.210 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.210 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.210 13:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:04.210 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 [2024-07-26 13:30:01.457481] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.210 13:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=955828 00:19:04.210 13:30:01 
-- target/bdev_io_wait.sh@30 -- # READ_PID=955831 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:04.210 13:30:01 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:04.210 13:30:01 -- nvmf/common.sh@520 -- # config=() 00:19:04.210 13:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.210 13:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.210 13:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.210 { 00:19:04.210 "params": { 00:19:04.210 "name": "Nvme$subsystem", 00:19:04.211 "trtype": "$TEST_TRANSPORT", 00:19:04.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "$NVMF_PORT", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.211 "hdgst": ${hdgst:-false}, 00:19:04.211 "ddgst": ${ddgst:-false} 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 } 00:19:04.211 EOF 00:19:04.211 )") 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=955834 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # config=() 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.211 13:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=955837 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.211 { 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme$subsystem", 00:19:04.211 "trtype": "$TEST_TRANSPORT", 00:19:04.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "$NVMF_PORT", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.211 "hdgst": ${hdgst:-false}, 00:19:04.211 "ddgst": ${ddgst:-false} 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 } 00:19:04.211 EOF 00:19:04.211 )") 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@35 -- # sync 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # cat 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # config=() 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.211 13:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.211 { 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme$subsystem", 00:19:04.211 "trtype": "$TEST_TRANSPORT", 00:19:04.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "$NVMF_PORT", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.211 "hdgst": ${hdgst:-false}, 00:19:04.211 "ddgst": ${ddgst:-false} 00:19:04.211 }, 00:19:04.211 
"method": "bdev_nvme_attach_controller" 00:19:04.211 } 00:19:04.211 EOF 00:19:04.211 )") 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # config=() 00:19:04.211 13:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # cat 00:19:04.211 13:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:04.211 { 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme$subsystem", 00:19:04.211 "trtype": "$TEST_TRANSPORT", 00:19:04.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "$NVMF_PORT", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.211 "hdgst": ${hdgst:-false}, 00:19:04.211 "ddgst": ${ddgst:-false} 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 } 00:19:04.211 EOF 00:19:04.211 )") 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # cat 00:19:04.211 13:30:01 -- target/bdev_io_wait.sh@37 -- # wait 955828 00:19:04.211 13:30:01 -- nvmf/common.sh@542 -- # cat 00:19:04.211 13:30:01 -- nvmf/common.sh@544 -- # jq . 00:19:04.211 13:30:01 -- nvmf/common.sh@544 -- # jq . 00:19:04.211 13:30:01 -- nvmf/common.sh@544 -- # jq . 00:19:04.211 13:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.211 13:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme1", 00:19:04.211 "trtype": "tcp", 00:19:04.211 "traddr": "10.0.0.2", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "4420", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.211 "hdgst": false, 00:19:04.211 "ddgst": false 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 }' 00:19:04.211 13:30:01 -- nvmf/common.sh@544 -- # jq . 
00:19:04.211 13:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.211 13:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme1", 00:19:04.211 "trtype": "tcp", 00:19:04.211 "traddr": "10.0.0.2", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "4420", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.211 "hdgst": false, 00:19:04.211 "ddgst": false 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 }' 00:19:04.211 13:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.211 13:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme1", 00:19:04.211 "trtype": "tcp", 00:19:04.211 "traddr": "10.0.0.2", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "4420", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.211 "hdgst": false, 00:19:04.211 "ddgst": false 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 }' 00:19:04.211 13:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:19:04.211 13:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:04.211 "params": { 00:19:04.211 "name": "Nvme1", 00:19:04.211 "trtype": "tcp", 00:19:04.211 "traddr": "10.0.0.2", 00:19:04.211 "adrfam": "ipv4", 00:19:04.211 "trsvcid": "4420", 00:19:04.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.211 "hdgst": false, 00:19:04.211 "ddgst": false 00:19:04.211 }, 00:19:04.211 "method": "bdev_nvme_attach_controller" 00:19:04.211 }' 00:19:04.211 [2024-07-26 13:30:01.508239] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:04.211 [2024-07-26 13:30:01.508291] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:04.211 [2024-07-26 13:30:01.509619] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:04.211 [2024-07-26 13:30:01.509620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:04.211 [2024-07-26 13:30:01.509668] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-26 13:30:01.509669] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:04.211 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:04.211 [2024-07-26 13:30:01.511235] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:04.211 [2024-07-26 13:30:01.511279] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:04.211 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.211 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.211 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.211 [2024-07-26 13:30:01.656796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.211 [2024-07-26 13:30:01.674155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:04.473 [2024-07-26 13:30:01.701576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.473 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.473 [2024-07-26 13:30:01.716943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:04.473 [2024-07-26 13:30:01.748770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.473 [2024-07-26 13:30:01.764645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:04.473 [2024-07-26 13:30:01.796565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.473 [2024-07-26 13:30:01.812999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:04.473 Running I/O for 1 seconds... 00:19:04.473 Running I/O for 1 seconds... 00:19:04.734 Running I/O for 1 seconds... 00:19:04.734 Running I/O for 1 seconds... 00:19:05.679 00:19:05.679 Latency(us) 00:19:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.679 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:05.679 Nvme1n1 : 1.00 13116.22 51.24 0.00 0.00 9730.92 5079.04 19770.03 00:19:05.679 =================================================================================================================== 00:19:05.679 Total : 13116.22 51.24 0.00 0.00 9730.92 5079.04 19770.03 00:19:05.679 00:19:05.679 Latency(us) 00:19:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.679 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:05.679 Nvme1n1 : 1.01 13215.53 51.62 0.00 0.00 9655.93 5079.04 17694.72 00:19:05.679 =================================================================================================================== 00:19:05.679 Total : 13215.53 51.62 0.00 0.00 9655.93 5079.04 17694.72 00:19:05.679 00:19:05.679 Latency(us) 00:19:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.679 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:05.679 Nvme1n1 : 1.00 17591.45 68.72 0.00 0.00 7260.87 3399.68 26323.63 00:19:05.679 =================================================================================================================== 00:19:05.679 Total : 17591.45 68.72 0.00 0.00 7260.87 3399.68 26323.63 00:19:05.679 00:19:05.679 Latency(us) 00:19:05.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.679 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:05.679 Nvme1n1 : 1.00 191889.64 749.57 0.00 0.00 663.98 266.24 744.11 00:19:05.679 =================================================================================================================== 00:19:05.679 Total : 191889.64 749.57 0.00 0.00 663.98 266.24 744.11 00:19:05.941 13:30:03 -- target/bdev_io_wait.sh@38 -- # wait 955831 00:19:05.941 
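Four bdevperf instances ran in parallel against the same cnode1 namespace, one workload each (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), which is why four one-second latency tables appear above; the flush job reports far higher IOPS, presumably because flushes against the RAM-backed Malloc namespace complete without moving any data. Each instance is launched roughly as in the sketch below, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available (command line copied from the trace; the PID variable name is illustrative):

# unmap workload on core mask 0x80; --json reads the generated target config
# through a process substitution, which shows up as /dev/fd/63 in the trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x80 -i 4 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w unmap -t 1 -s 256 &
unmap_pid=$!
# ...the write/read/flush instances differ only in the -m and -w arguments...
wait "$unmap_pid"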
13:30:03 -- target/bdev_io_wait.sh@39 -- # wait 955834 00:19:05.941 13:30:03 -- target/bdev_io_wait.sh@40 -- # wait 955837 00:19:05.941 13:30:03 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.941 13:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.941 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.941 13:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.941 13:30:03 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:05.941 13:30:03 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:05.941 13:30:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.941 13:30:03 -- nvmf/common.sh@116 -- # sync 00:19:05.941 13:30:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:05.941 13:30:03 -- nvmf/common.sh@119 -- # set +e 00:19:05.941 13:30:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.941 13:30:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:05.941 rmmod nvme_tcp 00:19:05.941 rmmod nvme_fabrics 00:19:05.941 rmmod nvme_keyring 00:19:05.941 13:30:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.941 13:30:03 -- nvmf/common.sh@123 -- # set -e 00:19:05.941 13:30:03 -- nvmf/common.sh@124 -- # return 0 00:19:05.941 13:30:03 -- nvmf/common.sh@477 -- # '[' -n 955458 ']' 00:19:05.941 13:30:03 -- nvmf/common.sh@478 -- # killprocess 955458 00:19:05.941 13:30:03 -- common/autotest_common.sh@926 -- # '[' -z 955458 ']' 00:19:05.941 13:30:03 -- common/autotest_common.sh@930 -- # kill -0 955458 00:19:05.941 13:30:03 -- common/autotest_common.sh@931 -- # uname 00:19:05.941 13:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:05.941 13:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 955458 00:19:05.941 13:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:05.941 13:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:05.941 13:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 955458' 00:19:05.941 killing process with pid 955458 00:19:05.941 13:30:03 -- common/autotest_common.sh@945 -- # kill 955458 00:19:05.941 13:30:03 -- common/autotest_common.sh@950 -- # wait 955458 00:19:06.202 13:30:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:06.202 13:30:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:06.202 13:30:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:06.202 13:30:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.202 13:30:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:06.202 13:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.202 13:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.202 13:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.120 13:30:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:08.120 00:19:08.120 real 0m12.370s 00:19:08.120 user 0m18.660s 00:19:08.120 sys 0m6.657s 00:19:08.120 13:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.120 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:08.120 ************************************ 00:19:08.120 END TEST nvmf_bdev_io_wait 00:19:08.120 ************************************ 00:19:08.382 13:30:05 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:08.382 13:30:05 -- common/autotest_common.sh@1077 -- # '[' 3 
-le 1 ']' 00:19:08.382 13:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.382 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:08.382 ************************************ 00:19:08.382 START TEST nvmf_queue_depth 00:19:08.382 ************************************ 00:19:08.382 13:30:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:08.382 * Looking for test storage... 00:19:08.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.382 13:30:05 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.382 13:30:05 -- nvmf/common.sh@7 -- # uname -s 00:19:08.382 13:30:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.382 13:30:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.382 13:30:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.382 13:30:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.382 13:30:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.382 13:30:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.382 13:30:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.382 13:30:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.382 13:30:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.382 13:30:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.382 13:30:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.382 13:30:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.382 13:30:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.382 13:30:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.382 13:30:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.382 13:30:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.382 13:30:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.382 13:30:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.382 13:30:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.382 13:30:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.382 13:30:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.382 13:30:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.382 13:30:05 -- paths/export.sh@5 -- # export PATH 00:19:08.382 13:30:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.382 13:30:05 -- nvmf/common.sh@46 -- # : 0 00:19:08.382 13:30:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.382 13:30:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.382 13:30:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.382 13:30:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.382 13:30:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.382 13:30:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.382 13:30:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.382 13:30:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.382 13:30:05 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:08.382 13:30:05 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:08.382 13:30:05 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.382 13:30:05 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:08.382 13:30:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.382 13:30:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.382 13:30:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.382 13:30:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.382 13:30:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.382 13:30:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.382 13:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.382 13:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.382 13:30:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:08.382 13:30:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.382 13:30:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.382 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:15.033 13:30:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:15.033 13:30:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:15.033 13:30:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:15.033 13:30:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:15.033 13:30:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:15.033 13:30:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:15.033 13:30:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:15.033 13:30:12 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:15.033 13:30:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:15.033 13:30:12 -- nvmf/common.sh@295 -- # e810=() 00:19:15.033 13:30:12 -- nvmf/common.sh@295 -- # local -ga e810 00:19:15.033 13:30:12 -- nvmf/common.sh@296 -- # x722=() 00:19:15.033 13:30:12 -- nvmf/common.sh@296 -- # local -ga x722 00:19:15.033 13:30:12 -- nvmf/common.sh@297 -- # mlx=() 00:19:15.033 13:30:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:15.033 13:30:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.033 13:30:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:15.033 13:30:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:15.033 13:30:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:15.033 13:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:15.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:15.033 13:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:15.033 13:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:15.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:15.033 13:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:15.033 13:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.033 13:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
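The loop traced here maps each matching PCI function to its kernel netdev by globbing sysfs and stripping the path prefix. Reduced to a standalone sketch (the PCI address is the first E810 port found on this rig; on a machine without that device the glob is left unexpanded unless nullglob is set):

pci=0000:4b:00.0
# Every netdev registered for this PCI function appears under its sysfs net/ directory.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# Keep only the interface names (strip the /sys/... prefix), e.g. cvl_0_0 on this rig.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"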
00:19:15.033 13:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:15.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:15.033 13:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.033 13:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:15.033 13:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.033 13:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.033 13:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:15.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:15.033 13:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.033 13:30:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:15.033 13:30:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:15.033 13:30:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:15.033 13:30:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.033 13:30:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.033 13:30:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.033 13:30:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:15.033 13:30:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.033 13:30:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.033 13:30:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:15.033 13:30:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.033 13:30:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.033 13:30:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:15.033 13:30:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:15.033 13:30:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.033 13:30:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.295 13:30:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.295 13:30:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.295 13:30:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:15.295 13:30:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.295 13:30:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.295 13:30:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.295 13:30:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:15.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:19:15.295 00:19:15.295 --- 10.0.0.2 ping statistics --- 00:19:15.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.295 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:19:15.295 13:30:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:15.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.488 ms 00:19:15.558 00:19:15.558 --- 10.0.0.1 ping statistics --- 00:19:15.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.558 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:19:15.558 13:30:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.558 13:30:12 -- nvmf/common.sh@410 -- # return 0 00:19:15.558 13:30:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:15.558 13:30:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.558 13:30:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:15.558 13:30:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:15.558 13:30:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.558 13:30:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:15.558 13:30:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:15.558 13:30:12 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:15.558 13:30:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:15.558 13:30:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:15.558 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 13:30:12 -- nvmf/common.sh@469 -- # nvmfpid=960723 00:19:15.558 13:30:12 -- nvmf/common.sh@470 -- # waitforlisten 960723 00:19:15.558 13:30:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.558 13:30:12 -- common/autotest_common.sh@819 -- # '[' -z 960723 ']' 00:19:15.558 13:30:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.558 13:30:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:15.558 13:30:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.558 13:30:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:15.558 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:19:15.558 [2024-07-26 13:30:12.861597] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:15.558 [2024-07-26 13:30:12.861655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.558 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.558 [2024-07-26 13:30:12.946949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.558 [2024-07-26 13:30:12.991469] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:15.558 [2024-07-26 13:30:12.991617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.558 [2024-07-26 13:30:12.991626] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.558 [2024-07-26 13:30:12.991635] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
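The nvmf_tcp_init sequence traced above splits the two E810 ports into a point-to-point test topology: cvl_0_0 becomes the target interface inside a new network namespace with 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1, TCP port 4420 is opened, and both directions are ping-verified. Condensed from the commands shown in the trace (root required; interface names are this rig's renamed E810 ports):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator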
00:19:15.558 [2024-07-26 13:30:12.991668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.520 13:30:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:16.520 13:30:13 -- common/autotest_common.sh@852 -- # return 0 00:19:16.520 13:30:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:16.520 13:30:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 13:30:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.520 13:30:13 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.520 13:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 [2024-07-26 13:30:13.691385] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.520 13:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.520 13:30:13 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:16.520 13:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 Malloc0 00:19:16.520 13:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.520 13:30:13 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:16.520 13:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 13:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.520 13:30:13 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.520 13:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 13:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.520 13:30:13 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.520 13:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.520 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 [2024-07-26 13:30:13.767933] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.520 13:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.520 13:30:13 -- target/queue_depth.sh@30 -- # bdevperf_pid=960987 00:19:16.520 13:30:13 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.520 13:30:13 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:16.520 13:30:13 -- target/queue_depth.sh@33 -- # waitforlisten 960987 /var/tmp/bdevperf.sock 00:19:16.520 13:30:13 -- common/autotest_common.sh@819 -- # '[' -z 960987 ']' 00:19:16.520 13:30:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.521 13:30:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.521 13:30:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
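The rpc_cmd calls traced above do the target-side plumbing for the queue-depth run: create the TCP transport, back a subsystem with a 64 MB Malloc bdev (512-byte blocks), and listen on 10.0.0.2:4420; bdevperf is then started with -z against a second RPC socket, /var/tmp/bdevperf.sock. Spelled out with SPDK's scripts/rpc.py, the target setup amounts roughly to the sketch below (the test actually goes through its rpc_cmd wrapper; RPC names and arguments are copied from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The host side then attaches over the bdevperf socket, as the next trace lines show:
# rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
#     -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1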
00:19:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.521 13:30:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.521 13:30:13 -- common/autotest_common.sh@10 -- # set +x 00:19:16.521 [2024-07-26 13:30:13.820125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:16.521 [2024-07-26 13:30:13.820180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960987 ] 00:19:16.521 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.521 [2024-07-26 13:30:13.881582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.521 [2024-07-26 13:30:13.915722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.464 13:30:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.464 13:30:14 -- common/autotest_common.sh@852 -- # return 0 00:19:17.464 13:30:14 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:17.464 13:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.464 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 NVMe0n1 00:19:17.464 13:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.464 13:30:14 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:17.464 Running I/O for 10 seconds... 00:19:27.469 00:19:27.469 Latency(us) 00:19:27.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:27.469 Verification LBA range: start 0x0 length 0x4000 00:19:27.469 NVMe0n1 : 10.04 18140.78 70.86 0.00 0.00 56291.11 9393.49 60293.12 00:19:27.469 =================================================================================================================== 00:19:27.469 Total : 18140.78 70.86 0.00 0.00 56291.11 9393.49 60293.12 00:19:27.469 0 00:19:27.469 13:30:24 -- target/queue_depth.sh@39 -- # killprocess 960987 00:19:27.469 13:30:24 -- common/autotest_common.sh@926 -- # '[' -z 960987 ']' 00:19:27.469 13:30:24 -- common/autotest_common.sh@930 -- # kill -0 960987 00:19:27.469 13:30:24 -- common/autotest_common.sh@931 -- # uname 00:19:27.469 13:30:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.469 13:30:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 960987 00:19:27.730 13:30:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:27.730 13:30:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:27.730 13:30:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 960987' 00:19:27.730 killing process with pid 960987 00:19:27.730 13:30:24 -- common/autotest_common.sh@945 -- # kill 960987 00:19:27.730 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.730 00:19:27.730 Latency(us) 00:19:27.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.730 =================================================================================================================== 00:19:27.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.730 13:30:24 -- 
common/autotest_common.sh@950 -- # wait 960987 00:19:27.730 13:30:25 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:27.730 13:30:25 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:27.730 13:30:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:27.730 13:30:25 -- nvmf/common.sh@116 -- # sync 00:19:27.730 13:30:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:27.730 13:30:25 -- nvmf/common.sh@119 -- # set +e 00:19:27.730 13:30:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:27.730 13:30:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:27.730 rmmod nvme_tcp 00:19:27.730 rmmod nvme_fabrics 00:19:27.730 rmmod nvme_keyring 00:19:27.730 13:30:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:27.730 13:30:25 -- nvmf/common.sh@123 -- # set -e 00:19:27.730 13:30:25 -- nvmf/common.sh@124 -- # return 0 00:19:27.730 13:30:25 -- nvmf/common.sh@477 -- # '[' -n 960723 ']' 00:19:27.730 13:30:25 -- nvmf/common.sh@478 -- # killprocess 960723 00:19:27.730 13:30:25 -- common/autotest_common.sh@926 -- # '[' -z 960723 ']' 00:19:27.730 13:30:25 -- common/autotest_common.sh@930 -- # kill -0 960723 00:19:27.730 13:30:25 -- common/autotest_common.sh@931 -- # uname 00:19:27.730 13:30:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.730 13:30:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 960723 00:19:27.991 13:30:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:27.991 13:30:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:27.991 13:30:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 960723' 00:19:27.991 killing process with pid 960723 00:19:27.991 13:30:25 -- common/autotest_common.sh@945 -- # kill 960723 00:19:27.991 13:30:25 -- common/autotest_common.sh@950 -- # wait 960723 00:19:27.991 13:30:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:27.991 13:30:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:27.991 13:30:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:27.991 13:30:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.991 13:30:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:27.991 13:30:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.991 13:30:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.991 13:30:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.536 13:30:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:30.536 00:19:30.536 real 0m21.787s 00:19:30.536 user 0m25.415s 00:19:30.536 sys 0m6.397s 00:19:30.536 13:30:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.536 13:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 ************************************ 00:19:30.536 END TEST nvmf_queue_depth 00:19:30.536 ************************************ 00:19:30.536 13:30:27 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:30.536 13:30:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:30.536 13:30:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:30.536 13:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:30.536 ************************************ 00:19:30.536 START TEST nvmf_multipath 00:19:30.536 ************************************ 00:19:30.536 13:30:27 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:30.536 * Looking for test storage... 00:19:30.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.536 13:30:27 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.536 13:30:27 -- nvmf/common.sh@7 -- # uname -s 00:19:30.536 13:30:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.536 13:30:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.536 13:30:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.536 13:30:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.536 13:30:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.536 13:30:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.536 13:30:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.536 13:30:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.536 13:30:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.537 13:30:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.537 13:30:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.537 13:30:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.537 13:30:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.537 13:30:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.537 13:30:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.537 13:30:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.537 13:30:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.537 13:30:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.537 13:30:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.537 13:30:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.537 13:30:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.537 13:30:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.537 13:30:27 -- paths/export.sh@5 -- # export PATH 00:19:30.537 13:30:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.537 13:30:27 -- nvmf/common.sh@46 -- # : 0 00:19:30.537 13:30:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:30.537 13:30:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:30.537 13:30:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:30.537 13:30:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.537 13:30:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.537 13:30:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:30.537 13:30:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:30.537 13:30:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:30.537 13:30:27 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:30.537 13:30:27 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:30.537 13:30:27 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:30.537 13:30:27 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:30.537 13:30:27 -- target/multipath.sh@43 -- # nvmftestinit 00:19:30.537 13:30:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:30.537 13:30:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.537 13:30:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:30.537 13:30:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:30.537 13:30:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:30.537 13:30:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.537 13:30:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.537 13:30:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.537 13:30:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:30.537 13:30:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:30.537 13:30:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:30.537 13:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:37.154 13:30:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:37.154 13:30:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:37.154 13:30:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:37.154 13:30:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:37.154 13:30:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:37.154 13:30:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:37.154 13:30:34 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:37.154 13:30:34 -- nvmf/common.sh@294 -- # net_devs=() 00:19:37.154 13:30:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:37.154 13:30:34 -- nvmf/common.sh@295 -- # e810=() 00:19:37.154 13:30:34 -- nvmf/common.sh@295 -- # local -ga e810 00:19:37.154 13:30:34 -- nvmf/common.sh@296 -- # x722=() 00:19:37.154 13:30:34 -- nvmf/common.sh@296 -- # local -ga x722 00:19:37.154 13:30:34 -- nvmf/common.sh@297 -- # mlx=() 00:19:37.154 13:30:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:37.154 13:30:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.154 13:30:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:37.154 13:30:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:37.154 13:30:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.154 13:30:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:37.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:37.154 13:30:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:37.154 13:30:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:37.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:37.154 13:30:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.154 13:30:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.154 13:30:34 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.154 13:30:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:37.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:37.154 13:30:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.154 13:30:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:37.154 13:30:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.154 13:30:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.154 13:30:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:37.154 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:37.154 13:30:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.154 13:30:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:37.154 13:30:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:37.154 13:30:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:37.154 13:30:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.154 13:30:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.154 13:30:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.154 13:30:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:37.154 13:30:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.154 13:30:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.154 13:30:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:37.154 13:30:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.154 13:30:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.154 13:30:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:37.154 13:30:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:37.154 13:30:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.154 13:30:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.154 13:30:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.154 13:30:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.154 13:30:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:37.154 13:30:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.154 13:30:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.154 13:30:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.416 13:30:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:37.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:19:37.416 00:19:37.416 --- 10.0.0.2 ping statistics --- 00:19:37.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.416 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:19:37.416 13:30:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:19:37.416 00:19:37.416 --- 10.0.0.1 ping statistics --- 00:19:37.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.416 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:19:37.416 13:30:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.416 13:30:34 -- nvmf/common.sh@410 -- # return 0 00:19:37.416 13:30:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:37.416 13:30:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.416 13:30:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:37.416 13:30:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:37.416 13:30:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.416 13:30:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:37.416 13:30:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:37.416 13:30:34 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:37.416 13:30:34 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:37.416 only one NIC for nvmf test 00:19:37.416 13:30:34 -- target/multipath.sh@47 -- # nvmftestfini 00:19:37.416 13:30:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:37.416 13:30:34 -- nvmf/common.sh@116 -- # sync 00:19:37.416 13:30:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:37.416 13:30:34 -- nvmf/common.sh@119 -- # set +e 00:19:37.416 13:30:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:37.416 13:30:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:37.416 rmmod nvme_tcp 00:19:37.416 rmmod nvme_fabrics 00:19:37.416 rmmod nvme_keyring 00:19:37.416 13:30:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:37.416 13:30:34 -- nvmf/common.sh@123 -- # set -e 00:19:37.416 13:30:34 -- nvmf/common.sh@124 -- # return 0 00:19:37.416 13:30:34 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:37.416 13:30:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:37.416 13:30:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:37.416 13:30:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:37.416 13:30:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.416 13:30:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:37.416 13:30:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.416 13:30:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.416 13:30:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.967 13:30:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.967 13:30:36 -- target/multipath.sh@48 -- # exit 0 00:19:39.967 13:30:36 -- target/multipath.sh@1 -- # nvmftestfini 00:19:39.967 13:30:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.967 13:30:36 -- nvmf/common.sh@116 -- # sync 00:19:39.967 13:30:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:39.967 13:30:36 -- nvmf/common.sh@119 -- # set +e 00:19:39.967 13:30:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.967 13:30:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:39.967 13:30:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.967 13:30:36 -- nvmf/common.sh@123 -- # set -e 00:19:39.967 13:30:36 -- nvmf/common.sh@124 -- # return 0 00:19:39.967 13:30:36 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:39.967 13:30:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.967 13:30:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.967 13:30:36 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:19:39.967 13:30:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.967 13:30:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:39.967 13:30:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.967 13:30:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.967 13:30:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.967 13:30:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.967 00:19:39.967 real 0m9.438s 00:19:39.967 user 0m2.041s 00:19:39.967 sys 0m5.302s 00:19:39.967 13:30:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.967 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:19:39.967 ************************************ 00:19:39.967 END TEST nvmf_multipath 00:19:39.967 ************************************ 00:19:39.967 13:30:36 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:39.967 13:30:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:39.967 13:30:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.967 13:30:36 -- common/autotest_common.sh@10 -- # set +x 00:19:39.967 ************************************ 00:19:39.967 START TEST nvmf_zcopy 00:19:39.967 ************************************ 00:19:39.967 13:30:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:39.967 * Looking for test storage... 00:19:39.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.967 13:30:37 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.967 13:30:37 -- nvmf/common.sh@7 -- # uname -s 00:19:39.967 13:30:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.967 13:30:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.967 13:30:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.967 13:30:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.967 13:30:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.967 13:30:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.967 13:30:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.967 13:30:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.967 13:30:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.967 13:30:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.967 13:30:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.967 13:30:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.967 13:30:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.967 13:30:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.967 13:30:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.967 13:30:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.967 13:30:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.967 13:30:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.967 13:30:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.967 13:30:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.967 13:30:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.967 13:30:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.967 13:30:37 -- paths/export.sh@5 -- # export PATH 00:19:39.967 13:30:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.967 13:30:37 -- nvmf/common.sh@46 -- # : 0 00:19:39.967 13:30:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.967 13:30:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.967 13:30:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.967 13:30:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.967 13:30:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.967 13:30:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.967 13:30:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.967 13:30:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.967 13:30:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:39.967 13:30:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:39.967 13:30:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.967 13:30:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.967 13:30:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.967 13:30:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.967 13:30:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.967 13:30:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.967 13:30:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.967 13:30:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:39.967 13:30:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:39.967 13:30:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:39.967 13:30:37 -- common/autotest_common.sh@10 -- # set +x 00:19:46.603 13:30:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.603 13:30:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:46.603 13:30:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:46.603 13:30:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:46.603 13:30:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:46.603 13:30:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:46.603 13:30:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:46.603 13:30:43 -- nvmf/common.sh@294 -- # net_devs=() 00:19:46.603 13:30:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:46.603 13:30:43 -- nvmf/common.sh@295 -- # e810=() 00:19:46.603 13:30:43 -- nvmf/common.sh@295 -- # local -ga e810 00:19:46.603 13:30:43 -- nvmf/common.sh@296 -- # x722=() 00:19:46.603 13:30:43 -- nvmf/common.sh@296 -- # local -ga x722 00:19:46.603 13:30:43 -- nvmf/common.sh@297 -- # mlx=() 00:19:46.603 13:30:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:46.603 13:30:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.603 13:30:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:46.603 13:30:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:46.603 13:30:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.603 13:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:46.603 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:46.603 13:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.603 13:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:46.603 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:46.603 
13:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.603 13:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.603 13:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.603 13:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:46.603 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:46.603 13:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.603 13:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.603 13:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.603 13:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.603 13:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:46.603 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:46.603 13:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.603 13:30:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:46.603 13:30:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:46.603 13:30:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:46.603 13:30:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.603 13:30:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.603 13:30:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.603 13:30:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:46.603 13:30:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.603 13:30:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.603 13:30:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:46.603 13:30:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.603 13:30:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.604 13:30:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:46.604 13:30:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:46.604 13:30:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.604 13:30:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.865 13:30:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.865 13:30:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.865 13:30:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:46.865 13:30:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.865 13:30:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.865 13:30:44 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.865 13:30:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:46.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:19:46.865 00:19:46.865 --- 10.0.0.2 ping statistics --- 00:19:46.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.865 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:19:46.865 13:30:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:19:46.865 00:19:46.865 --- 10.0.0.1 ping statistics --- 00:19:46.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.865 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:19:46.865 13:30:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.865 13:30:44 -- nvmf/common.sh@410 -- # return 0 00:19:46.865 13:30:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.865 13:30:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.865 13:30:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:46.865 13:30:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:46.865 13:30:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.865 13:30:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:46.865 13:30:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.127 13:30:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:47.127 13:30:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:47.127 13:30:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:47.127 13:30:44 -- common/autotest_common.sh@10 -- # set +x 00:19:47.127 13:30:44 -- nvmf/common.sh@469 -- # nvmfpid=971463 00:19:47.127 13:30:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.127 13:30:44 -- nvmf/common.sh@470 -- # waitforlisten 971463 00:19:47.127 13:30:44 -- common/autotest_common.sh@819 -- # '[' -z 971463 ']' 00:19:47.127 13:30:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.127 13:30:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:47.127 13:30:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.127 13:30:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:47.127 13:30:44 -- common/autotest_common.sh@10 -- # set +x 00:19:47.127 [2024-07-26 13:30:44.399027] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
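For anyone reproducing this outside the autotest harness: the nvmf_tcp_init step above moves one port of the E810 pair (cvl_0_0) into a dedicated network namespace for the target, keeps the peer port (cvl_0_1) in the root namespace for the initiator, opens TCP port 4420, and sanity-checks both directions with ping before nvmf_tgt is started inside that namespace. A minimal sketch of the same wiring, run as root, with the interface names and 10.0.0.x addresses taken from this log (adapt them to your own NIC pair):

# target-side namespace; the target port lives here
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps the peer port in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic reach the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns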
00:19:47.127 [2024-07-26 13:30:44.399093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.127 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.127 [2024-07-26 13:30:44.487802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.127 [2024-07-26 13:30:44.531408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:47.127 [2024-07-26 13:30:44.531555] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.127 [2024-07-26 13:30:44.531566] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.127 [2024-07-26 13:30:44.531573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.127 [2024-07-26 13:30:44.531605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.072 13:30:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.072 13:30:45 -- common/autotest_common.sh@852 -- # return 0 00:19:48.072 13:30:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:48.072 13:30:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 13:30:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.072 13:30:45 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:48.072 13:30:45 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 [2024-07-26 13:30:45.229848] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 [2024-07-26 13:30:45.254072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 malloc0 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.072 13:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.072 13:30:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.072 13:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.072 13:30:45 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:48.072 13:30:45 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:48.072 13:30:45 -- nvmf/common.sh@520 -- # config=() 00:19:48.072 13:30:45 -- nvmf/common.sh@520 -- # local subsystem config 00:19:48.072 13:30:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:48.072 13:30:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:48.072 { 00:19:48.072 "params": { 00:19:48.072 "name": "Nvme$subsystem", 00:19:48.072 "trtype": "$TEST_TRANSPORT", 00:19:48.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.072 "adrfam": "ipv4", 00:19:48.072 "trsvcid": "$NVMF_PORT", 00:19:48.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.072 "hdgst": ${hdgst:-false}, 00:19:48.072 "ddgst": ${ddgst:-false} 00:19:48.072 }, 00:19:48.072 "method": "bdev_nvme_attach_controller" 00:19:48.072 } 00:19:48.072 EOF 00:19:48.072 )") 00:19:48.072 13:30:45 -- nvmf/common.sh@542 -- # cat 00:19:48.072 13:30:45 -- nvmf/common.sh@544 -- # jq . 00:19:48.072 13:30:45 -- nvmf/common.sh@545 -- # IFS=, 00:19:48.072 13:30:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:48.072 "params": { 00:19:48.072 "name": "Nvme1", 00:19:48.072 "trtype": "tcp", 00:19:48.072 "traddr": "10.0.0.2", 00:19:48.072 "adrfam": "ipv4", 00:19:48.072 "trsvcid": "4420", 00:19:48.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.072 "hdgst": false, 00:19:48.072 "ddgst": false 00:19:48.072 }, 00:19:48.072 "method": "bdev_nvme_attach_controller" 00:19:48.072 }' 00:19:48.072 [2024-07-26 13:30:45.349325] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:48.072 [2024-07-26 13:30:45.349386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971705 ] 00:19:48.072 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.072 [2024-07-26 13:30:45.412750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.072 [2024-07-26 13:30:45.448860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.334 Running I/O for 10 seconds... 
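At this point the target is fully configured: the test created a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 with a data listener and a discovery listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1; bdevperf then drives 10 seconds of verify I/O at queue depth 128 with 8 KiB I/Os, consuming the generated JSON shown above through /dev/fd/62 via process substitution. Roughly the same setup expressed as plain scripts/rpc.py calls (rpc_cmd is a thin wrapper around that script) — a sketch only, run as root from the SPDK tree, assuming test/nvmf/common.sh is sourced so that gen_nvmf_target_json is available:

# target-side RPCs (the RPC socket is a Unix socket, so no netns juggling is needed)
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0      # 32 MB bdev, 4096-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# initiator-side run: 10 s of verify I/O, queue depth 128, 8 KiB per I/O
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192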
00:20:00.576 00:20:00.576 Latency(us) 00:20:00.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.576 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:00.576 Verification LBA range: start 0x0 length 0x1000 00:20:00.576 Nvme1n1 : 10.05 14363.82 112.22 0.00 0.00 8852.83 2594.13 43690.67 00:20:00.577 =================================================================================================================== 00:20:00.577 Total : 14363.82 112.22 0.00 0.00 8852.83 2594.13 43690.67 00:20:00.577 13:30:55 -- target/zcopy.sh@39 -- # perfpid=973855 00:20:00.577 13:30:55 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:00.577 13:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:00.577 13:30:55 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:00.577 13:30:55 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:00.577 13:30:55 -- nvmf/common.sh@520 -- # config=() 00:20:00.577 13:30:55 -- nvmf/common.sh@520 -- # local subsystem config 00:20:00.577 13:30:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:00.577 13:30:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:00.577 { 00:20:00.577 "params": { 00:20:00.577 "name": "Nvme$subsystem", 00:20:00.577 "trtype": "$TEST_TRANSPORT", 00:20:00.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.577 "adrfam": "ipv4", 00:20:00.577 "trsvcid": "$NVMF_PORT", 00:20:00.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.577 "hdgst": ${hdgst:-false}, 00:20:00.577 "ddgst": ${ddgst:-false} 00:20:00.577 }, 00:20:00.577 "method": "bdev_nvme_attach_controller" 00:20:00.577 } 00:20:00.577 EOF 00:20:00.577 )") 00:20:00.577 13:30:55 -- nvmf/common.sh@542 -- # cat 00:20:00.577 [2024-07-26 13:30:55.942276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:55.942306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 13:30:55 -- nvmf/common.sh@544 -- # jq . 
00:20:00.577 13:30:55 -- nvmf/common.sh@545 -- # IFS=, 00:20:00.577 13:30:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:00.577 "params": { 00:20:00.577 "name": "Nvme1", 00:20:00.577 "trtype": "tcp", 00:20:00.577 "traddr": "10.0.0.2", 00:20:00.577 "adrfam": "ipv4", 00:20:00.577 "trsvcid": "4420", 00:20:00.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.577 "hdgst": false, 00:20:00.577 "ddgst": false 00:20:00.577 }, 00:20:00.577 "method": "bdev_nvme_attach_controller" 00:20:00.577 }' 00:20:00.577 [2024-07-26 13:30:55.954270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:55.954279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:55.966298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:55.966306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:55.978331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:55.978338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:55.990360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:55.990367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:55.990478] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:00.577 [2024-07-26 13:30:55.990538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973855 ] 00:20:00.577 [2024-07-26 13:30:56.002391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.002399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.014420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.014428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.577 [2024-07-26 13:30:56.026452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.026460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.038483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.038490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.049210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.577 [2024-07-26 13:30:56.050514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.050522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.062549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.062560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
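The second bdevperf invocation reuses the same generated config (this time via /dev/fd/63) and switches the workload to a 5-second, 50/50 random read/write mix (-w randrw -M 50) at queue depth 128 with 8 KiB I/Os. After jq assembles it, the file handed to bdevperf is an ordinary SPDK JSON config whose substantive entry attaches Nvme1 to 10.0.0.2:4420. A hand-written equivalent that can be fed to bdevperf directly — a sketch only, since the exact wrapper emitted by gen_nvmf_target_json may carry additional defaults:

cat <<'JSON' > /tmp/nvme1.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192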
00:20:00.577 [2024-07-26 13:30:56.074579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.074593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.077440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.577 [2024-07-26 13:30:56.086608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.086617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.098647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.098664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.110675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.110686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.122703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.122712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.134776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.134785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.146810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.146821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.158839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.158847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.170872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.170882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.182905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.182914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.194934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.194943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.206968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.206982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 Running I/O for 5 seconds... 
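The stream of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows appears to be expected noise rather than a failure: while the 5-second randrw run is in flight, the test evidently keeps hitting the paused-subsystem add-namespace path (nvmf_rpc_ns_paused), and re-adding NSID 1 while it is still attached is rejected every time, exercising subsystem pause/resume under active zero-copy I/O. A rough sketch of that kind of loop, assuming the helpers from test/nvmf/common.sh are sourced; the exact sequence in zcopy.sh may differ:

# start the 5 s random read/write run in the background
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
# hammer the namespace-add path while I/O is running; each attempt is
# expected to fail with "Requested NSID 1 already in use"
while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"    # bdevperf itself must still complete successfully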
00:20:00.577 [2024-07-26 13:30:56.231649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.231666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.248204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.248220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.259488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.259503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.267385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.267400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.275455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.275470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.283966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.283981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.292943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.292959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.301469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.301484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.314648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.314664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.322940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.322955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.577 [2024-07-26 13:30:56.331846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.577 [2024-07-26 13:30:56.331861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.340598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.340613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.349317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.349333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.358152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.358168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.366669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 
[2024-07-26 13:30:56.366684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.375127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.375142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.382861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.382876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.391434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.391448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.400218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.400233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.408764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.408779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.417610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.417624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.426092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.426106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.434495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.434509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.443085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.443099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.451675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.451689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.460372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.460387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.469140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.469155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.477352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.477366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.485853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.485867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.494664] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.494678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.503194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.503215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.511831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.511845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.520628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.520642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.529095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.529109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.537629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.537644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.546282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.546297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.554906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.554920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.564066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.564080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.572784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.572798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.581649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.581663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.590441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.590455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.599175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.599189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.607682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.607696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.616454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.616468] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.624968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.624983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.633801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.633815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.642516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.642530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.650986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.651000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.660030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.660045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.668660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.668675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.677476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.677491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.686060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.686075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.695027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.695041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.703523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.703537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.712008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.712023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.720699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.720713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.729325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.729339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.737912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.737926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.746594] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.746609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.755102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.578 [2024-07-26 13:30:56.755116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.578 [2024-07-26 13:30:56.763426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.763441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.772019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.772033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.780135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.780149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.789149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.789164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.797636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.797650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.806405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.806419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.815020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.815035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.823773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.823788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.832696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.832710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.841206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.841221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.849740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.849754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.858606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.858621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.867309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.867323] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.875986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.876002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.884683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.884699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.893547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.893562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.902193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.902212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.911005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.911021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.919841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.919856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.928287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.928302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.936683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.936698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.945396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.945411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.953848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.953863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.962797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.962812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.971540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.971558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.979964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.979979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.988783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.988798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:56.997184] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:56.997199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.005329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.005344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.013763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.013778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.022492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.022507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.031108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.031123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.040132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.040147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.048496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.048511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.057372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.057386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.066237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.066252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.074416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.074431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.579 [2024-07-26 13:30:57.083256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.579 [2024-07-26 13:30:57.083271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.092110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.092125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.100957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.100972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.109660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.109676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.118283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.118299] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.126764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.126780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.135148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.135166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.143500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.143515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.152265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.152280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.160370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.160385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.168871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.168886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.177217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.177233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.185794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.185809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.193534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.193548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.202437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.202451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.211051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.211066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.219730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.219745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.228214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.228229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.236981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.236996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.245303] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.245318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.253836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.253851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.262366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.262381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.270646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.270661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.279171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.279186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.288183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.288197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.296849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.296866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.305620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.305635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.314396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.314410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.323274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.323289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.331916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.331930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.340849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.340864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.349535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.349550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.358422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.358438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.366946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.366961] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.580 [2024-07-26 13:30:57.375543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.580 [2024-07-26 13:30:57.375558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.383833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.383849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.392467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.392482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.400858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.400873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.409377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.409392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.417944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.417959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.426290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.426305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.435064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.435078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.443362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.443378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.451876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.451891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.460543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.460565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.468824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.468838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.477489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.477505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.486042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.486057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.494876] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.494891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.503763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.503778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.512575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.512591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.520392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.520407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.529111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.529126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.537922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.537937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.546537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.546553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.555134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.555148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.564066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.564081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.572342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.572357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.581256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.581271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.589717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.589732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.598557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.598571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.607353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.607368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.615703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.615718] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.624668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.624682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.633449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.633463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.641903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.641917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.650767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.650782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.659158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.659172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.667956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.667970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.676155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.581 [2024-07-26 13:30:57.676170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.581 [2024-07-26 13:30:57.684994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.685008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.693509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.693524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.701823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.701837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.710277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.710291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.719212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.719226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.727299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.727314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.735448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.735463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.744113] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.744127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.752263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.752277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.760709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.760724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.769568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.769583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.778184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.778198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.786524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.786539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.794837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.794851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.803483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.803497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.811726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.811741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.820085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.820099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.828845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.828859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.837777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.837792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.846044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.846058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.855058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.855072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.863440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.863454] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.872276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.872290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.880679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.880693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.889145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.889159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.897971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.897986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.910464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.910479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.918185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.918199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.926881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.926896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.935698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.935713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.943807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.943822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.952701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.952716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.961536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.961550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.970064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.970078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.582 [2024-07-26 13:30:57.978543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.582 [2024-07-26 13:30:57.978557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:57.987133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:57.987147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:57.995778] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:57.995793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:58.004434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:58.004449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:58.013224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:58.013238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:58.021914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:58.021928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:58.030450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:58.030464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.583 [2024-07-26 13:30:58.039305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.583 [2024-07-26 13:30:58.039319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.048019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.048034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.056897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.056911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.065327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.065342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.073997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.074011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.082310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.082325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.090482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.090497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.099246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.099261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.108125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.108139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.116804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.116819] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.125509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.125523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.134410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.134425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.142835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.142849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.151374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.151388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.160507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.160521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.168278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.168292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.177073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.177087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.185533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.185547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.194523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.194537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.203293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.203308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.211803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.844 [2024-07-26 13:30:58.211817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.844 [2024-07-26 13:30:58.220313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.220327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.228922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.228937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.237171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.237185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.245940] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.245954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.254810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.254825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.263155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.263170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.271557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.271574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.280002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.280016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.288121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.288135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.296559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.296574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.304785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.304799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.845 [2024-07-26 13:30:58.313290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.845 [2024-07-26 13:30:58.313304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.321409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.321424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.329912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.329926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.338374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.338389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.347037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.347052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.355352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.355366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.363299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.363313] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.372212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.372227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.384615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.384631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.392359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.392373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.401249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.401265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.408814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.408829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.417929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.417944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.426647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.426662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.434797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.434814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.443508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.443522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.452386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.452400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.461150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.461164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.470016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.470031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.106 [2024-07-26 13:30:58.478634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.106 [2024-07-26 13:30:58.478649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.487453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.487468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.496371] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.496386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.504898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.504912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.513382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.513396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.522124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.522138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.530120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.530135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.539101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.539115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.547961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.547976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.556760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.556774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.565155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.565170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.107 [2024-07-26 13:30:58.573817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.107 [2024-07-26 13:30:58.573832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.582554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.582569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.591458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.591473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.600130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.600148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.608541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.608556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.617294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.617309] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.625315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.625330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.634275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.634290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.642917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.642932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.651757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.651772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.660105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.660120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.668569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.668584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.677176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.677191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.685408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.685422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.694087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.694101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.702861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.702876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.711393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.711408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.719974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.719989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.728635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.728651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.737082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.737097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.745909] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.745924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.753499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.753514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.762263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.762281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.770902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.770917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.779902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.779917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.787970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.787985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.796895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.796910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.805548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.805563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.813281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.813296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.821906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.821923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.368 [2024-07-26 13:30:58.830215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.368 [2024-07-26 13:30:58.830230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.369 [2024-07-26 13:30:58.839155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.369 [2024-07-26 13:30:58.839170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.847742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.847757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.856188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.856208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.864057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.864072] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.872942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.872957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.881708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.881723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.890682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.890697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.899090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.899105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.907684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.907699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.916395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.916410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.924762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.924777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.933489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.933503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.942075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.942090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.950685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.950700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.959094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.959109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.967643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.967658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.976454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.976470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.984268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.984283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:58.993609] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:58.993624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.001779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.001792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.010791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.010805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.019012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.019027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.027479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.027494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.035674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.035689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.044913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.044928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.053160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.053175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.061573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.061588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.069657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.069672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.077911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.077926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.086682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.086697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.630 [2024-07-26 13:30:59.095124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.630 [2024-07-26 13:30:59.095138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.103961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.103976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.112613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.112627] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.119585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.119600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.128837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.128852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.137823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.137838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.145390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.145404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.154208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.154224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.163012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.163027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.171677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.171692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.180466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.180482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.188325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.188341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.197064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.197078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.205705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.205719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.214097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.214112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.892 [2024-07-26 13:30:59.222623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.892 [2024-07-26 13:30:59.222638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.230689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.230704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.239278] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.239293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.247626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.247640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.256244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.256258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.264596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.264611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.272661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.272676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.281486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.281501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.289914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.289929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.298785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.298800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.307382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.307396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.315863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.315878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.324138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.324153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.332748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.332763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.341679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.341694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.350468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.350482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.893 [2024-07-26 13:30:59.358935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.893 [2024-07-26 13:30:59.358950] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.367918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.367933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.376633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.376648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.385220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.385234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.393838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.393852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.402433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.402447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.411321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.411336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.419874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.419890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.428098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.428112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.436525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.436539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.445164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.445179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.454168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.454182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.463258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.463272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.471305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.471320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.479641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.479656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.487673] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.487687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.496311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.496325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.504238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.504252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.512896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.512910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.521286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.521300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.529885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.529900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.538532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.538546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.546917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.546931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.555227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.555242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.563935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.563953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.572332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.572347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.580402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.580416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.589158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.589172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.597616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.597630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.155 [2024-07-26 13:30:59.606463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.155 [2024-07-26 13:30:59.606478] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.156 [2024-07-26 13:30:59.614864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.156 [2024-07-26 13:30:59.614879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.156 [2024-07-26 13:30:59.623559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.156 [2024-07-26 13:30:59.623574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.632056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.632071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.640510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.640524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.649230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.649244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.657607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.657621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.666063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.666078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.674964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.674978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.683143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.683157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.692116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.692130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.700634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.700648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.709380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.709395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.717914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.717929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.726529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.726546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.734641] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.734655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.743715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.743730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.752515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.752529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.760152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.760166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.768975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.768990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.777283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.777297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.785969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.785983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.416 [2024-07-26 13:30:59.794469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.416 [2024-07-26 13:30:59.794484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.802875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.802889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.811355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.811370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.819611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.819626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.828216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.828231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.836850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.836865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.845235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.845249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.853864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.853878] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.862498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.862512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.871056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.871070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.879669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.879684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.417 [2024-07-26 13:30:59.888487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.417 [2024-07-26 13:30:59.888509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.677 [2024-07-26 13:30:59.896957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.896972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.905766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.905780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.914242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.914257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.922646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.922661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.931193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.931212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.940433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.940448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.949149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.949163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.958011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.958025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.975112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.975127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.982637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.982651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:30:59.991749] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:30:59.991763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.000356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.000371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.008489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.008504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.017900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.017914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.025891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.025905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.034548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.034562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.042693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.042707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.051068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.051082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.059750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.059768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.068364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.068378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.076982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.076997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.085632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.085646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.094205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.094219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.102392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.102406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.111104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.111118] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.119322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.119336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.127519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.127534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.136357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.136372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.678 [2024-07-26 13:31:00.144536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.678 [2024-07-26 13:31:00.144550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.939 [2024-07-26 13:31:00.153296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.939 [2024-07-26 13:31:00.153311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.939 [2024-07-26 13:31:00.162407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.939 [2024-07-26 13:31:00.162422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.939 [2024-07-26 13:31:00.171257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.939 [2024-07-26 13:31:00.171271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.939 [2024-07-26 13:31:00.179542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.939 [2024-07-26 13:31:00.179557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.939 [2024-07-26 13:31:00.187727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.939 [2024-07-26 13:31:00.187741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.195991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.196005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.204694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.204709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.213415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.213430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.222306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.222321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.231280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.231294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.239789] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.239803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.248541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.248556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.257362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.257377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.265543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.265557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.274246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.274260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.283210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.283225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.291129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.291144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.300209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.300223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.308830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.308844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.317909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.317924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.326265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.326279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.334953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.334967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.343232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.343248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.351931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.351946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.360231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.360246] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.368407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.368421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.377742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.377757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.386384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.386398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.394586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.394600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.403046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.403062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.940 [2024-07-26 13:31:00.411660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.940 [2024-07-26 13:31:00.411675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.201 [2024-07-26 13:31:00.419879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.419894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.428194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.428213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.436467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.436483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.445079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.445093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.453497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.453512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.462290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.462305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.470494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.470508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.479062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.479076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.487475] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.487491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.495892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.495907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.504454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.504469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.513143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.513158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.521515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.521529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.527851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.527866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.538377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.538392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.546101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.546116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.555209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.555223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.563060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.563075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.571764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.571779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.579952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.579967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.588251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.588266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.596591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.596606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.605224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.605239] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.613911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.613925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.621788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.621803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.630750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.630765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.639403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.639418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.647658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.647672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.656125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.656140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.664206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.664220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.202 [2024-07-26 13:31:00.672783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.202 [2024-07-26 13:31:00.672799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.680989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.681004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.694135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.694151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.701801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.701815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.710800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.710815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.719702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.719717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.728022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.728037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.736750] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.736765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.745542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.745557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.754059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.754073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.762775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.762790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.771614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.771629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.780046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.780061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.788610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.788625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.797166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.797181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.805450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.805465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.813754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.813770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.822726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.822741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.831595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.831610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.839928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.839943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.848395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.848410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.857081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.857097] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.865904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.865922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.874548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.874563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.883189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.883209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.891913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.891928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.899858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.899872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.908973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.908988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.917270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.917285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.925478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.925493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.464 [2024-07-26 13:31:00.933971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.464 [2024-07-26 13:31:00.933986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.942345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.942360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.950694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.950709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.959505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.959520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.968361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.968376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.977037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.977052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.985764] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.985780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:00.994285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:00.994300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.003128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.003143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.011194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.011213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.019534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.019549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.028375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.028393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.036874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.036888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.045320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.045335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.054115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.726 [2024-07-26 13:31:01.054130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.726 [2024-07-26 13:31:01.062187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.062205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.070369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.070383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.079440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.079455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.087560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.087575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.096065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.096080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.104536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.104550] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.113247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.113262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.121871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.121886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.130415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.130429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.139031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.139045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.148074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.148089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.156197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.156215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.164979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.164994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.173250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.173264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.182152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.182166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.190871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.190887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.727 [2024-07-26 13:31:01.199607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.727 [2024-07-26 13:31:01.199621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.988 [2024-07-26 13:31:01.208326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.988 [2024-07-26 13:31:01.208341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.988 [2024-07-26 13:31:01.217364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.988 [2024-07-26 13:31:01.217379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.988 [2024-07-26 13:31:01.225710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:03.988 [2024-07-26 13:31:01.225725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:03.988 00:20:03.988 Latency(us) 00:20:03.988 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max
00:20:03.988 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:03.988 Nvme1n1 : 5.01 19832.48 154.94 0.00 0.00 6447.96 2307.41 27197.44
00:20:03.988 ===================================================================================================================
00:20:03.988 Total : 19832.48 154.94 0.00 0.00 6447.96 2307.41 27197.44
00:20:03.989 [2024-07-26 13:31:01.231649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.231662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.239666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.239678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.247689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.247699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.255712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.255723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.263728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.263739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.275762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.275773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.287788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.287799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.299816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.299826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.311852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.311862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.323883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.323894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.335910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.335920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 [2024-07-26 13:31:01.347939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.989 [2024-07-26 13:31:01.347952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:03.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (973855) - No such process
00:20:03.989 13:31:01 -- target/zcopy.sh@49 -- # wait 973855
00:20:03.989 13:31:01
-- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:03.989 13:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.989 13:31:01 -- common/autotest_common.sh@10 -- # set +x
00:20:03.989 13:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.989 13:31:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:20:03.989 13:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.989 13:31:01 -- common/autotest_common.sh@10 -- # set +x
00:20:03.989 delay0
00:20:03.989 13:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.989 13:31:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:20:03.989 13:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.989 13:31:01 -- common/autotest_common.sh@10 -- # set +x
00:20:03.989 13:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.989 13:31:01 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:20:03.989 EAL: No free 2048 kB hugepages reported on node 1
00:20:04.250 [2024-07-26 13:31:01.490215] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:20:10.842 Initializing NVMe Controllers
00:20:10.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:10.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:10.842 Initialization complete. Launching workers.
00:20:10.842 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 112
00:20:10.842 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 399, failed to submit 33
00:20:10.842 success 184, unsuccess 215, failed 0
00:20:10.842 13:31:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:20:10.842 13:31:07 -- target/zcopy.sh@60 -- # nvmftestfini
00:20:10.842 13:31:07 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:10.842 13:31:07 -- nvmf/common.sh@116 -- # sync
00:20:10.842 13:31:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:10.842 13:31:07 -- nvmf/common.sh@119 -- # set +e
00:20:10.842 13:31:07 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:10.842 13:31:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:10.842 rmmod nvme_tcp
00:20:10.842 rmmod nvme_fabrics
00:20:10.842 rmmod nvme_keyring
00:20:10.842 13:31:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:10.842 13:31:07 -- nvmf/common.sh@123 -- # set -e
00:20:10.842 13:31:07 -- nvmf/common.sh@124 -- # return 0
00:20:10.842 13:31:07 -- nvmf/common.sh@477 -- # '[' -n 971463 ']'
00:20:10.842 13:31:07 -- nvmf/common.sh@478 -- # killprocess 971463
00:20:10.842 13:31:07 -- common/autotest_common.sh@926 -- # '[' -z 971463 ']'
00:20:10.842 13:31:07 -- common/autotest_common.sh@930 -- # kill -0 971463
00:20:10.842 13:31:07 -- common/autotest_common.sh@931 -- # uname
00:20:10.842 13:31:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:10.842 13:31:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 971463
00:20:10.842 13:31:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:20:10.842 13:31:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
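For readers following along: the zcopy.sh@52-@56 trace above swaps the namespace behind NSID 1 for a delay bdev layered on malloc0 and then drives it with the abort example over TCP. A rough standalone equivalent is sketched below; it assumes an SPDK checkout as the working directory and a running nvmf target reachable over the default RPC socket, and it uses scripts/rpc.py in place of the harness's rpc_cmd wrapper, so treat it as illustrative rather than a verbatim replay:
# Sketch only: recreate the delay-namespace setup and the abort run from this log
# (NQN, bdev names, latencies and the TCP address are taken from the trace above).
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'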
00:20:10.842 13:31:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 971463'
killing process with pid 971463
00:20:10.842 13:31:07 -- common/autotest_common.sh@945 -- # kill 971463
00:20:10.842 13:31:07 -- common/autotest_common.sh@950 -- # wait 971463
00:20:10.842 13:31:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:10.842 13:31:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:10.842 13:31:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:10.842 13:31:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:10.842 13:31:07 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:10.842 13:31:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:10.842 13:31:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:10.842 13:31:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.761 13:31:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:20:12.761
00:20:12.761 real 0m33.017s
00:20:12.761 user 0m44.240s
00:20:12.761 sys 0m10.004s
00:20:12.761 13:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:12.761 13:31:09 -- common/autotest_common.sh@10 -- # set +x
00:20:12.761 ************************************
00:20:12.761 END TEST nvmf_zcopy
00:20:12.761 ************************************
00:20:12.761 13:31:09 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:12.761 13:31:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:20:12.761 13:31:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:12.761 13:31:09 -- common/autotest_common.sh@10 -- # set +x
00:20:12.761 ************************************
00:20:12.761 START TEST nvmf_nmic
00:20:12.761 ************************************
00:20:12.762 13:31:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:12.762 * Looking for test storage...
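Between the two stages the harness tears down the zcopy fixture and then launches the nmic script. Condensed into plain commands, that teardown and handoff look roughly like the sketch below; the PID 971463, the interface name cvl_0_1 and the checkout path are specific to this run, and root privileges plus the spdk checkout as working directory are assumed:
# Sketch of the teardown traced above, followed by the next stage's entry point.
modprobe -v -r nvme-tcp                      # -v prints the rmmod calls seen above (nvme_tcp, nvme_fabrics, nvme_keyring)
modprobe -v -r nvme-fabrics
kill 971463                                  # nvmf target (reactor) process for this run
ip -4 addr flush cvl_0_1                     # drop the test address from the target-side interface
./test/nvmf/target/nmic.sh --transport=tcp   # next test stage, as invoked by run_test above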
00:20:12.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.762 13:31:10 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.762 13:31:10 -- nvmf/common.sh@7 -- # uname -s 00:20:12.762 13:31:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.762 13:31:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.762 13:31:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.762 13:31:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.762 13:31:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.762 13:31:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.762 13:31:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.762 13:31:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.762 13:31:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.762 13:31:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.762 13:31:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.762 13:31:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.762 13:31:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.762 13:31:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.762 13:31:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.762 13:31:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.762 13:31:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.762 13:31:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.762 13:31:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.762 13:31:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.762 13:31:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.762 13:31:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.762 13:31:10 -- paths/export.sh@5 -- # export PATH 00:20:12.762 13:31:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.762 13:31:10 -- nvmf/common.sh@46 -- # : 0 00:20:12.762 13:31:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.762 13:31:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.762 13:31:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.762 13:31:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.762 13:31:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.762 13:31:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.762 13:31:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.762 13:31:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.762 13:31:10 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.762 13:31:10 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.762 13:31:10 -- target/nmic.sh@14 -- # nvmftestinit 00:20:12.762 13:31:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.762 13:31:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.762 13:31:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.762 13:31:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.762 13:31:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.762 13:31:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.762 13:31:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.762 13:31:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.762 13:31:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:12.762 13:31:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:12.762 13:31:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:12.762 13:31:10 -- common/autotest_common.sh@10 -- # set +x 00:20:20.915 13:31:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:20.915 13:31:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:20.915 13:31:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:20.915 13:31:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:20.915 13:31:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:20.915 13:31:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:20.915 13:31:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:20.915 13:31:16 -- nvmf/common.sh@294 -- # net_devs=() 00:20:20.915 13:31:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:20.915 13:31:16 -- nvmf/common.sh@295 -- # 
e810=() 00:20:20.915 13:31:16 -- nvmf/common.sh@295 -- # local -ga e810 00:20:20.915 13:31:16 -- nvmf/common.sh@296 -- # x722=() 00:20:20.915 13:31:16 -- nvmf/common.sh@296 -- # local -ga x722 00:20:20.915 13:31:16 -- nvmf/common.sh@297 -- # mlx=() 00:20:20.915 13:31:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:20.915 13:31:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.915 13:31:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.916 13:31:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.916 13:31:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.916 13:31:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.916 13:31:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.916 13:31:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:20.916 13:31:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:20.916 13:31:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:20.916 13:31:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:20.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:20.916 13:31:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:20.916 13:31:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:20.916 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:20.916 13:31:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:20.916 13:31:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.916 13:31:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.916 13:31:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:20.916 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:20:20.916 13:31:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.916 13:31:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:20.916 13:31:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.916 13:31:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.916 13:31:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:20.916 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:20.916 13:31:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.916 13:31:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:20.916 13:31:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:20.916 13:31:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:20.916 13:31:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.916 13:31:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.916 13:31:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.916 13:31:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:20.916 13:31:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.916 13:31:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.916 13:31:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:20.916 13:31:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.916 13:31:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.916 13:31:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:20.916 13:31:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:20.916 13:31:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.916 13:31:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.916 13:31:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.916 13:31:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.916 13:31:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:20.916 13:31:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.916 13:31:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.916 13:31:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.916 13:31:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:20.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:20:20.916 00:20:20.916 --- 10.0.0.2 ping statistics --- 00:20:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.916 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:20:20.916 13:31:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.481 ms 00:20:20.916 00:20:20.916 --- 10.0.0.1 ping statistics --- 00:20:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.916 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:20:20.916 13:31:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.916 13:31:17 -- nvmf/common.sh@410 -- # return 0 00:20:20.916 13:31:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:20.916 13:31:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.916 13:31:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:20.916 13:31:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:20.916 13:31:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.916 13:31:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:20.916 13:31:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:20.916 13:31:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:20.916 13:31:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:20.916 13:31:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:20.916 13:31:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 13:31:17 -- nvmf/common.sh@469 -- # nvmfpid=980233 00:20:20.916 13:31:17 -- nvmf/common.sh@470 -- # waitforlisten 980233 00:20:20.916 13:31:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.916 13:31:17 -- common/autotest_common.sh@819 -- # '[' -z 980233 ']' 00:20:20.916 13:31:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.916 13:31:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.916 13:31:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.916 13:31:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.916 13:31:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 [2024-07-26 13:31:17.284839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:20.916 [2024-07-26 13:31:17.284906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.916 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.916 [2024-07-26 13:31:17.356298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.916 [2024-07-26 13:31:17.395611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:20.916 [2024-07-26 13:31:17.395756] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.916 [2024-07-26 13:31:17.395766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.916 [2024-07-26 13:31:17.395774] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.916 [2024-07-26 13:31:17.395928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.916 [2024-07-26 13:31:17.396047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.916 [2024-07-26 13:31:17.396231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.916 [2024-07-26 13:31:17.396255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.916 13:31:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.916 13:31:18 -- common/autotest_common.sh@852 -- # return 0 00:20:20.916 13:31:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.916 13:31:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:20.916 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 13:31:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.916 13:31:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.916 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.916 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 [2024-07-26 13:31:18.107531] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.916 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.916 13:31:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:20.916 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.916 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 Malloc0 00:20:20.916 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.916 13:31:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:20.916 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.916 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.916 13:31:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.916 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.916 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.917 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.917 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 [2024-07-26 13:31:18.167000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:20.917 test case1: single bdev can't be used in multiple subsystems 00:20:20.917 13:31:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:20.917 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.917 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:20.917 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:20:20.917 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@28 -- # nmic_status=0 00:20:20.917 13:31:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:20.917 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.917 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 [2024-07-26 13:31:18.202920] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:20.917 [2024-07-26 13:31:18.202938] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:20.917 [2024-07-26 13:31:18.202946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.917 request: 00:20:20.917 { 00:20:20.917 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.917 "namespace": { 00:20:20.917 "bdev_name": "Malloc0" 00:20:20.917 }, 00:20:20.917 "method": "nvmf_subsystem_add_ns", 00:20:20.917 "req_id": 1 00:20:20.917 } 00:20:20.917 Got JSON-RPC error response 00:20:20.917 response: 00:20:20.917 { 00:20:20.917 "code": -32602, 00:20:20.917 "message": "Invalid parameters" 00:20:20.917 } 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@29 -- # nmic_status=1 00:20:20.917 13:31:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:20.917 13:31:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:20.917 Adding namespace failed - expected result. 00:20:20.917 13:31:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:20.917 test case2: host connect to nvmf target in multiple paths 00:20:20.917 13:31:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:20.917 13:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.917 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.917 [2024-07-26 13:31:18.215080] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:20.917 13:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.917 13:31:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:22.359 13:31:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:24.275 13:31:21 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:24.275 13:31:21 -- common/autotest_common.sh@1177 -- # local i=0 00:20:24.275 13:31:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:24.275 13:31:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:24.275 13:31:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:26.211 13:31:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:26.211 13:31:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:26.211 13:31:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:26.211 13:31:23 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:20:26.211 13:31:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:26.211 13:31:23 -- common/autotest_common.sh@1187 -- # return 0 00:20:26.211 13:31:23 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:26.211 [global] 00:20:26.211 thread=1 00:20:26.211 invalidate=1 00:20:26.211 rw=write 00:20:26.211 time_based=1 00:20:26.211 runtime=1 00:20:26.211 ioengine=libaio 00:20:26.211 direct=1 00:20:26.211 bs=4096 00:20:26.211 iodepth=1 00:20:26.211 norandommap=0 00:20:26.211 numjobs=1 00:20:26.211 00:20:26.211 verify_dump=1 00:20:26.211 verify_backlog=512 00:20:26.211 verify_state_save=0 00:20:26.211 do_verify=1 00:20:26.211 verify=crc32c-intel 00:20:26.211 [job0] 00:20:26.211 filename=/dev/nvme0n1 00:20:26.211 Could not set queue depth (nvme0n1) 00:20:26.470 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:26.470 fio-3.35 00:20:26.470 Starting 1 thread 00:20:27.414 00:20:27.414 job0: (groupid=0, jobs=1): err= 0: pid=981717: Fri Jul 26 13:31:24 2024 00:20:27.414 read: IOPS=305, BW=1222KiB/s (1251kB/s)(1224KiB/1002msec) 00:20:27.414 slat (nsec): min=25833, max=60948, avg=27106.25, stdev=3123.46 00:20:27.414 clat (usec): min=1228, max=1653, avg=1504.33, stdev=68.35 00:20:27.414 lat (usec): min=1255, max=1680, avg=1531.44, stdev=68.27 00:20:27.414 clat percentiles (usec): 00:20:27.414 | 1.00th=[ 1303], 5.00th=[ 1369], 10.00th=[ 1418], 20.00th=[ 1467], 00:20:27.414 | 30.00th=[ 1483], 40.00th=[ 1500], 50.00th=[ 1516], 60.00th=[ 1532], 00:20:27.414 | 70.00th=[ 1549], 80.00th=[ 1565], 90.00th=[ 1565], 95.00th=[ 1582], 00:20:27.414 | 99.00th=[ 1614], 99.50th=[ 1631], 99.90th=[ 1647], 99.95th=[ 1647], 00:20:27.414 | 99.99th=[ 1647] 00:20:27.414 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:27.414 slat (usec): min=11, max=30310, avg=94.56, stdev=1337.98 00:20:27.414 clat (usec): min=706, max=1209, avg=932.28, stdev=56.68 00:20:27.414 lat (usec): min=741, max=31235, avg=1026.84, stdev=1338.86 00:20:27.414 clat percentiles (usec): 00:20:27.414 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 848], 20.00th=[ 889], 00:20:27.414 | 30.00th=[ 930], 40.00th=[ 938], 50.00th=[ 947], 60.00th=[ 955], 00:20:27.414 | 70.00th=[ 963], 80.00th=[ 963], 90.00th=[ 979], 95.00th=[ 1004], 00:20:27.414 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1205], 99.95th=[ 1205], 00:20:27.414 | 99.99th=[ 1205] 00:20:27.414 bw ( KiB/s): min= 240, max= 3856, per=100.00%, avg=2048.00, stdev=2556.90, samples=2 00:20:27.414 iops : min= 60, max= 964, avg=512.00, stdev=639.22, samples=2 00:20:27.414 lat (usec) : 750=0.24%, 1000=59.29% 00:20:27.414 lat (msec) : 2=40.46% 00:20:27.414 cpu : usr=2.20%, sys=3.10%, ctx=822, majf=0, minf=1 00:20:27.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.414 issued rwts: total=306,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.414 00:20:27.414 Run status group 0 (all jobs): 00:20:27.414 READ: bw=1222KiB/s (1251kB/s), 1222KiB/s-1222KiB/s (1251kB/s-1251kB/s), io=1224KiB (1253kB), run=1002-1002msec 00:20:27.414 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), 
run=1002-1002msec 00:20:27.414 00:20:27.414 Disk stats (read/write): 00:20:27.414 nvme0n1: ios=264/512, merge=0/0, ticks=1331/390, in_queue=1721, util=99.10% 00:20:27.414 13:31:24 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:27.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:27.675 13:31:25 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:27.675 13:31:25 -- common/autotest_common.sh@1198 -- # local i=0 00:20:27.675 13:31:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:27.675 13:31:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.675 13:31:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:27.675 13:31:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.675 13:31:25 -- common/autotest_common.sh@1210 -- # return 0 00:20:27.675 13:31:25 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:27.675 13:31:25 -- target/nmic.sh@53 -- # nvmftestfini 00:20:27.675 13:31:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.675 13:31:25 -- nvmf/common.sh@116 -- # sync 00:20:27.675 13:31:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:27.675 13:31:25 -- nvmf/common.sh@119 -- # set +e 00:20:27.676 13:31:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.676 13:31:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:27.676 rmmod nvme_tcp 00:20:27.676 rmmod nvme_fabrics 00:20:27.676 rmmod nvme_keyring 00:20:27.676 13:31:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.676 13:31:25 -- nvmf/common.sh@123 -- # set -e 00:20:27.676 13:31:25 -- nvmf/common.sh@124 -- # return 0 00:20:27.676 13:31:25 -- nvmf/common.sh@477 -- # '[' -n 980233 ']' 00:20:27.676 13:31:25 -- nvmf/common.sh@478 -- # killprocess 980233 00:20:27.676 13:31:25 -- common/autotest_common.sh@926 -- # '[' -z 980233 ']' 00:20:27.676 13:31:25 -- common/autotest_common.sh@930 -- # kill -0 980233 00:20:27.676 13:31:25 -- common/autotest_common.sh@931 -- # uname 00:20:27.676 13:31:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.676 13:31:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 980233 00:20:27.936 13:31:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:27.936 13:31:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:27.936 13:31:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 980233' 00:20:27.936 killing process with pid 980233 00:20:27.936 13:31:25 -- common/autotest_common.sh@945 -- # kill 980233 00:20:27.936 13:31:25 -- common/autotest_common.sh@950 -- # wait 980233 00:20:27.937 13:31:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:27.937 13:31:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:27.937 13:31:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:27.937 13:31:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.937 13:31:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:27.937 13:31:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.937 13:31:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.937 13:31:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.487 13:31:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:30.487 00:20:30.487 real 0m17.415s 00:20:30.487 user 0m50.301s 00:20:30.487 sys 0m6.047s 00:20:30.487 13:31:27 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:20:30.487 13:31:27 -- common/autotest_common.sh@10 -- # set +x 00:20:30.487 ************************************ 00:20:30.487 END TEST nvmf_nmic 00:20:30.487 ************************************ 00:20:30.487 13:31:27 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:30.487 13:31:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:30.487 13:31:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:30.487 13:31:27 -- common/autotest_common.sh@10 -- # set +x 00:20:30.487 ************************************ 00:20:30.487 START TEST nvmf_fio_target 00:20:30.487 ************************************ 00:20:30.487 13:31:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:30.487 * Looking for test storage... 00:20:30.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.487 13:31:27 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.487 13:31:27 -- nvmf/common.sh@7 -- # uname -s 00:20:30.487 13:31:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.487 13:31:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.487 13:31:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.487 13:31:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.487 13:31:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.487 13:31:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.487 13:31:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.487 13:31:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.487 13:31:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.487 13:31:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.487 13:31:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.487 13:31:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.487 13:31:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.487 13:31:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.487 13:31:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.487 13:31:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.487 13:31:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.487 13:31:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.487 13:31:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.487 13:31:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.487 13:31:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.488 13:31:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.488 13:31:27 -- paths/export.sh@5 -- # export PATH 00:20:30.488 13:31:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.488 13:31:27 -- nvmf/common.sh@46 -- # : 0 00:20:30.488 13:31:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:30.488 13:31:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:30.488 13:31:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:30.488 13:31:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.488 13:31:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.488 13:31:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:30.488 13:31:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:30.488 13:31:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:30.488 13:31:27 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.488 13:31:27 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.488 13:31:27 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.488 13:31:27 -- target/fio.sh@16 -- # nvmftestinit 00:20:30.488 13:31:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:30.488 13:31:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.488 13:31:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:30.488 13:31:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:30.488 13:31:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:30.488 13:31:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.488 13:31:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.488 13:31:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.488 13:31:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:30.488 13:31:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:30.488 13:31:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:30.488 13:31:27 -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.081 13:31:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:37.081 13:31:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:37.081 13:31:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:37.081 13:31:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:37.081 13:31:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:37.081 13:31:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:37.081 13:31:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:37.081 13:31:34 -- nvmf/common.sh@294 -- # net_devs=() 00:20:37.081 13:31:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:37.081 13:31:34 -- nvmf/common.sh@295 -- # e810=() 00:20:37.081 13:31:34 -- nvmf/common.sh@295 -- # local -ga e810 00:20:37.081 13:31:34 -- nvmf/common.sh@296 -- # x722=() 00:20:37.081 13:31:34 -- nvmf/common.sh@296 -- # local -ga x722 00:20:37.081 13:31:34 -- nvmf/common.sh@297 -- # mlx=() 00:20:37.081 13:31:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:37.081 13:31:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.081 13:31:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:37.081 13:31:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:37.081 13:31:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:37.081 13:31:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.081 13:31:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:37.081 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:37.081 13:31:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:37.081 13:31:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:37.081 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:37.081 13:31:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:20:37.081 13:31:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:37.081 13:31:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:37.081 13:31:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.081 13:31:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.081 13:31:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.081 13:31:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:37.081 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:37.081 13:31:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.081 13:31:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:37.081 13:31:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.081 13:31:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:37.081 13:31:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.081 13:31:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:37.081 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:37.081 13:31:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.081 13:31:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:37.081 13:31:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:37.081 13:31:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:37.081 13:31:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:37.082 13:31:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.082 13:31:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.082 13:31:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.082 13:31:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:37.082 13:31:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.082 13:31:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.082 13:31:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:37.082 13:31:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.082 13:31:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.082 13:31:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:37.082 13:31:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:37.082 13:31:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.082 13:31:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.082 13:31:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.082 13:31:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.082 13:31:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:37.082 13:31:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.082 13:31:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.344 13:31:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.344 13:31:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:37.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:20:37.344 00:20:37.344 --- 10.0.0.2 ping statistics --- 00:20:37.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.344 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:20:37.344 13:31:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:20:37.344 00:20:37.344 --- 10.0.0.1 ping statistics --- 00:20:37.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.344 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:20:37.344 13:31:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.344 13:31:34 -- nvmf/common.sh@410 -- # return 0 00:20:37.344 13:31:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:37.344 13:31:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.344 13:31:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:37.344 13:31:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:37.344 13:31:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.344 13:31:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:37.344 13:31:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:37.344 13:31:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:37.344 13:31:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:37.344 13:31:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:37.344 13:31:34 -- common/autotest_common.sh@10 -- # set +x 00:20:37.344 13:31:34 -- nvmf/common.sh@469 -- # nvmfpid=986143 00:20:37.344 13:31:34 -- nvmf/common.sh@470 -- # waitforlisten 986143 00:20:37.344 13:31:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.344 13:31:34 -- common/autotest_common.sh@819 -- # '[' -z 986143 ']' 00:20:37.344 13:31:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.344 13:31:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.344 13:31:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.344 13:31:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.344 13:31:34 -- common/autotest_common.sh@10 -- # set +x 00:20:37.344 [2024-07-26 13:31:34.707850] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:37.344 [2024-07-26 13:31:34.707902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.344 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.344 [2024-07-26 13:31:34.773974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.344 [2024-07-26 13:31:34.803456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:37.344 [2024-07-26 13:31:34.803589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.344 [2024-07-26 13:31:34.803599] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:37.344 [2024-07-26 13:31:34.803607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.344 [2024-07-26 13:31:34.803750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.344 [2024-07-26 13:31:34.803850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.344 [2024-07-26 13:31:34.804005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.344 [2024-07-26 13:31:34.804007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.306 13:31:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.306 13:31:35 -- common/autotest_common.sh@852 -- # return 0 00:20:38.306 13:31:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:38.306 13:31:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:38.306 13:31:35 -- common/autotest_common.sh@10 -- # set +x 00:20:38.306 13:31:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.306 13:31:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:38.306 [2024-07-26 13:31:35.642009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.306 13:31:35 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.566 13:31:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:38.566 13:31:35 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.566 13:31:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:38.566 13:31:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.828 13:31:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:38.828 13:31:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:39.089 13:31:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:39.089 13:31:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:39.089 13:31:36 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:39.350 13:31:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:39.350 13:31:36 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:39.612 13:31:36 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:39.612 13:31:36 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:39.612 13:31:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:39.612 13:31:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:39.873 13:31:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:40.134 13:31:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:40.134 13:31:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.134 13:31:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:40.134 13:31:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:40.394 13:31:37 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.394 [2024-07-26 13:31:37.859555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.653 13:31:37 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:40.653 13:31:38 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:40.914 13:31:38 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:42.829 13:31:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:42.829 13:31:39 -- common/autotest_common.sh@1177 -- # local i=0 00:20:42.829 13:31:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:42.829 13:31:39 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:42.829 13:31:39 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:42.829 13:31:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:44.746 13:31:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:44.746 13:31:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:44.746 13:31:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:44.746 13:31:41 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:44.746 13:31:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:44.746 13:31:41 -- common/autotest_common.sh@1187 -- # return 0 00:20:44.746 13:31:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:44.746 [global] 00:20:44.746 thread=1 00:20:44.746 invalidate=1 00:20:44.746 rw=write 00:20:44.746 time_based=1 00:20:44.746 runtime=1 00:20:44.746 ioengine=libaio 00:20:44.746 direct=1 00:20:44.746 bs=4096 00:20:44.746 iodepth=1 00:20:44.746 norandommap=0 00:20:44.746 numjobs=1 00:20:44.746 00:20:44.746 verify_dump=1 00:20:44.746 verify_backlog=512 00:20:44.746 verify_state_save=0 00:20:44.746 do_verify=1 00:20:44.746 verify=crc32c-intel 00:20:44.746 [job0] 00:20:44.746 filename=/dev/nvme0n1 00:20:44.746 [job1] 00:20:44.746 filename=/dev/nvme0n2 00:20:44.746 [job2] 00:20:44.746 filename=/dev/nvme0n3 00:20:44.746 [job3] 00:20:44.746 filename=/dev/nvme0n4 00:20:44.746 Could not set queue depth (nvme0n1) 00:20:44.746 Could not set queue depth (nvme0n2) 00:20:44.746 Could not set queue depth (nvme0n3) 00:20:44.746 Could not set queue depth (nvme0n4) 00:20:45.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.005 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.005 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:20:45.005 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.005 fio-3.35 00:20:45.005 Starting 4 threads 00:20:46.405 00:20:46.405 job0: (groupid=0, jobs=1): err= 0: pid=987771: Fri Jul 26 13:31:43 2024 00:20:46.405 read: IOPS=10, BW=42.8KiB/s (43.8kB/s)(44.0KiB/1028msec) 00:20:46.405 slat (nsec): min=24949, max=25465, avg=25212.45, stdev=141.97 00:20:46.405 clat (usec): min=41799, max=42356, avg=41979.01, stdev=175.01 00:20:46.405 lat (usec): min=41824, max=42382, avg=42004.23, stdev=175.03 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:46.405 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:46.405 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:46.405 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:46.405 | 99.99th=[42206] 00:20:46.405 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:20:46.405 slat (usec): min=12, max=41692, avg=116.36, stdev=1841.03 00:20:46.405 clat (usec): min=511, max=1961, avg=980.10, stdev=149.98 00:20:46.405 lat (usec): min=545, max=43348, avg=1096.46, stdev=1876.69 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 889], 00:20:46.405 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 996], 00:20:46.405 | 70.00th=[ 1012], 80.00th=[ 1057], 90.00th=[ 1139], 95.00th=[ 1205], 00:20:46.405 | 99.00th=[ 1663], 99.50th=[ 1680], 99.90th=[ 1958], 99.95th=[ 1958], 00:20:46.405 | 99.99th=[ 1958] 00:20:46.405 bw ( KiB/s): min= 512, max= 3584, per=25.70%, avg=2048.00, stdev=2172.23, samples=2 00:20:46.405 iops : min= 128, max= 896, avg=512.00, stdev=543.06, samples=2 00:20:46.405 lat (usec) : 750=3.06%, 1000=61.38% 00:20:46.405 lat (msec) : 2=33.46%, 50=2.10% 00:20:46.405 cpu : usr=1.75%, sys=0.78%, ctx=526, majf=0, minf=1 00:20:46.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 issued rwts: total=11,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.405 job1: (groupid=0, jobs=1): err= 0: pid=987772: Fri Jul 26 13:31:43 2024 00:20:46.405 read: IOPS=412, BW=1650KiB/s (1690kB/s)(1652KiB/1001msec) 00:20:46.405 slat (nsec): min=7873, max=45444, avg=25950.55, stdev=2568.68 00:20:46.405 clat (usec): min=788, max=1310, avg=1137.00, stdev=68.23 00:20:46.405 lat (usec): min=814, max=1336, avg=1162.95, stdev=68.21 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 906], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1090], 00:20:46.405 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:20:46.405 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:20:46.405 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:20:46.405 | 99.99th=[ 1303] 00:20:46.405 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:46.405 slat (usec): min=10, max=44553, avg=119.79, stdev=1967.59 00:20:46.405 clat (usec): min=557, max=1741, avg=879.13, stdev=146.74 00:20:46.405 lat (usec): min=570, max=45571, avg=998.91, stdev=1979.27 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 603], 5.00th=[ 668], 10.00th=[ 701], 
20.00th=[ 758], 00:20:46.405 | 30.00th=[ 791], 40.00th=[ 824], 50.00th=[ 881], 60.00th=[ 922], 00:20:46.405 | 70.00th=[ 963], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:20:46.405 | 99.00th=[ 1319], 99.50th=[ 1663], 99.90th=[ 1745], 99.95th=[ 1745], 00:20:46.405 | 99.99th=[ 1745] 00:20:46.405 bw ( KiB/s): min= 3944, max= 3944, per=49.49%, avg=3944.00, stdev= 0.00, samples=1 00:20:46.405 iops : min= 986, max= 986, avg=986.00, stdev= 0.00, samples=1 00:20:46.405 lat (usec) : 750=9.30%, 1000=38.16% 00:20:46.405 lat (msec) : 2=52.54% 00:20:46.405 cpu : usr=1.70%, sys=2.50%, ctx=927, majf=0, minf=1 00:20:46.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 issued rwts: total=413,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.405 job2: (groupid=0, jobs=1): err= 0: pid=987778: Fri Jul 26 13:31:43 2024 00:20:46.405 read: IOPS=354, BW=1419KiB/s (1453kB/s)(1420KiB/1001msec) 00:20:46.405 slat (nsec): min=24999, max=56598, avg=25833.83, stdev=2819.32 00:20:46.405 clat (usec): min=1060, max=1538, avg=1314.45, stdev=81.16 00:20:46.405 lat (usec): min=1086, max=1564, avg=1340.29, stdev=81.38 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 1090], 5.00th=[ 1188], 10.00th=[ 1221], 20.00th=[ 1254], 00:20:46.405 | 30.00th=[ 1270], 40.00th=[ 1303], 50.00th=[ 1319], 60.00th=[ 1336], 00:20:46.405 | 70.00th=[ 1352], 80.00th=[ 1385], 90.00th=[ 1418], 95.00th=[ 1434], 00:20:46.405 | 99.00th=[ 1516], 99.50th=[ 1532], 99.90th=[ 1532], 99.95th=[ 1532], 00:20:46.405 | 99.99th=[ 1532] 00:20:46.405 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:46.405 slat (usec): min=10, max=4566, avg=46.22, stdev=212.81 00:20:46.405 clat (usec): min=670, max=1224, avg=963.61, stdev=83.68 00:20:46.405 lat (usec): min=684, max=5695, avg=1009.82, stdev=235.57 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 717], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 906], 00:20:46.405 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 988], 00:20:46.405 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:20:46.405 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:20:46.405 | 99.99th=[ 1221] 00:20:46.405 bw ( KiB/s): min= 3912, max= 3912, per=49.09%, avg=3912.00, stdev= 0.00, samples=1 00:20:46.405 iops : min= 978, max= 978, avg=978.00, stdev= 0.00, samples=1 00:20:46.405 lat (usec) : 750=0.92%, 1000=38.99% 00:20:46.405 lat (msec) : 2=60.09% 00:20:46.405 cpu : usr=2.00%, sys=2.10%, ctx=872, majf=0, minf=1 00:20:46.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.405 issued rwts: total=355,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.405 job3: (groupid=0, jobs=1): err= 0: pid=987780: Fri Jul 26 13:31:43 2024 00:20:46.405 read: IOPS=11, BW=47.2KiB/s (48.3kB/s)(48.0KiB/1018msec) 00:20:46.405 slat (nsec): min=25412, max=25901, avg=25629.58, stdev=144.19 00:20:46.405 clat (usec): min=41820, max=42048, avg=41951.00, stdev=72.70 00:20:46.405 lat (usec): min=41846, max=42074, 
avg=41976.63, stdev=72.65 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:46.405 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:46.405 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:46.405 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:46.405 | 99.99th=[42206] 00:20:46.405 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:20:46.405 slat (nsec): min=10019, max=70712, avg=33522.29, stdev=5102.44 00:20:46.405 clat (usec): min=523, max=1670, avg=961.58, stdev=121.15 00:20:46.405 lat (usec): min=535, max=1704, avg=995.10, stdev=121.36 00:20:46.405 clat percentiles (usec): 00:20:46.405 | 1.00th=[ 611], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 889], 00:20:46.405 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 988], 00:20:46.406 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1123], 00:20:46.406 | 99.00th=[ 1352], 99.50th=[ 1598], 99.90th=[ 1663], 99.95th=[ 1663], 00:20:46.406 | 99.99th=[ 1663] 00:20:46.406 bw ( KiB/s): min= 112, max= 3984, per=25.70%, avg=2048.00, stdev=2737.92, samples=2 00:20:46.406 iops : min= 28, max= 996, avg=512.00, stdev=684.48, samples=2 00:20:46.406 lat (usec) : 750=3.63%, 1000=61.26% 00:20:46.406 lat (msec) : 2=32.82%, 50=2.29% 00:20:46.406 cpu : usr=0.88%, sys=1.57%, ctx=525, majf=0, minf=1 00:20:46.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.406 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:46.406 00:20:46.406 Run status group 0 (all jobs): 00:20:46.406 READ: bw=3078KiB/s (3152kB/s), 42.8KiB/s-1650KiB/s (43.8kB/s-1690kB/s), io=3164KiB (3240kB), run=1001-1028msec 00:20:46.406 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2046KiB/s (2040kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1028msec 00:20:46.406 00:20:46.406 Disk stats (read/write): 00:20:46.406 nvme0n1: ios=52/512, merge=0/0, ticks=524/508, in_queue=1032, util=86.87% 00:20:46.406 nvme0n2: ios=318/512, merge=0/0, ticks=598/434, in_queue=1032, util=90.80% 00:20:46.406 nvme0n3: ios=294/512, merge=0/0, ticks=585/505, in_queue=1090, util=95.03% 00:20:46.406 nvme0n4: ios=57/512, merge=0/0, ticks=608/487, in_queue=1095, util=97.22% 00:20:46.406 13:31:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:46.406 [global] 00:20:46.406 thread=1 00:20:46.406 invalidate=1 00:20:46.406 rw=randwrite 00:20:46.406 time_based=1 00:20:46.406 runtime=1 00:20:46.406 ioengine=libaio 00:20:46.406 direct=1 00:20:46.406 bs=4096 00:20:46.406 iodepth=1 00:20:46.406 norandommap=0 00:20:46.406 numjobs=1 00:20:46.406 00:20:46.406 verify_dump=1 00:20:46.406 verify_backlog=512 00:20:46.406 verify_state_save=0 00:20:46.406 do_verify=1 00:20:46.406 verify=crc32c-intel 00:20:46.406 [job0] 00:20:46.406 filename=/dev/nvme0n1 00:20:46.406 [job1] 00:20:46.406 filename=/dev/nvme0n2 00:20:46.406 [job2] 00:20:46.406 filename=/dev/nvme0n3 00:20:46.406 [job3] 00:20:46.406 filename=/dev/nvme0n4 00:20:46.406 Could not set queue depth (nvme0n1) 00:20:46.406 Could not set queue depth (nvme0n2) 00:20:46.406 Could not set queue depth (nvme0n3) 00:20:46.406 
Could not set queue depth (nvme0n4) 00:20:46.742 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:46.742 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:46.742 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:46.742 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:46.742 fio-3.35 00:20:46.742 Starting 4 threads 00:20:47.685 00:20:47.685 job0: (groupid=0, jobs=1): err= 0: pid=988302: Fri Jul 26 13:31:45 2024 00:20:47.685 read: IOPS=11, BW=47.2KiB/s (48.4kB/s)(48.0KiB/1016msec) 00:20:47.685 slat (nsec): min=24522, max=25044, avg=24738.25, stdev=171.22 00:20:47.685 clat (usec): min=41890, max=42088, avg=41977.62, stdev=70.12 00:20:47.685 lat (usec): min=41915, max=42113, avg=42002.36, stdev=70.07 00:20:47.685 clat percentiles (usec): 00:20:47.685 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:47.685 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:47.685 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:47.685 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:47.685 | 99.99th=[42206] 00:20:47.685 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:20:47.685 slat (nsec): min=9696, max=51460, avg=31286.56, stdev=4998.02 00:20:47.685 clat (usec): min=724, max=1162, avg=958.51, stdev=75.76 00:20:47.685 lat (usec): min=756, max=1193, avg=989.80, stdev=76.83 00:20:47.685 clat percentiles (usec): 00:20:47.685 | 1.00th=[ 775], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 898], 00:20:47.685 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:20:47.685 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:20:47.685 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1156], 99.95th=[ 1156], 00:20:47.685 | 99.99th=[ 1156] 00:20:47.685 bw ( KiB/s): min= 96, max= 4000, per=25.92%, avg=2048.00, stdev=2760.54, samples=2 00:20:47.685 iops : min= 24, max= 1000, avg=512.00, stdev=690.14, samples=2 00:20:47.685 lat (usec) : 750=0.57%, 1000=67.56% 00:20:47.685 lat (msec) : 2=29.58%, 50=2.29% 00:20:47.685 cpu : usr=0.99%, sys=1.38%, ctx=527, majf=0, minf=1 00:20:47.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.685 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:47.685 job1: (groupid=0, jobs=1): err= 0: pid=988304: Fri Jul 26 13:31:45 2024 00:20:47.685 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:20:47.685 slat (nsec): min=8003, max=43498, avg=24948.54, stdev=2933.36 00:20:47.685 clat (usec): min=841, max=1254, avg=1119.49, stdev=65.00 00:20:47.685 lat (usec): min=849, max=1293, avg=1144.44, stdev=65.45 00:20:47.685 clat percentiles (usec): 00:20:47.685 | 1.00th=[ 889], 5.00th=[ 979], 10.00th=[ 1037], 20.00th=[ 1090], 00:20:47.685 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1139], 00:20:47.685 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1205], 00:20:47.685 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:20:47.685 | 99.99th=[ 1254] 
00:20:47.685 write: IOPS=515, BW=2062KiB/s (2111kB/s)(2064KiB/1001msec); 0 zone resets 00:20:47.685 slat (nsec): min=9377, max=66929, avg=27627.56, stdev=8699.95 00:20:47.685 clat (usec): min=501, max=957, avg=758.72, stdev=73.05 00:20:47.685 lat (usec): min=511, max=969, avg=786.34, stdev=75.02 00:20:47.685 clat percentiles (usec): 00:20:47.685 | 1.00th=[ 545], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 693], 00:20:47.685 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:20:47.686 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:20:47.686 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 955], 00:20:47.686 | 99.99th=[ 955] 00:20:47.686 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:47.686 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:47.686 lat (usec) : 750=17.80%, 1000=35.80% 00:20:47.686 lat (msec) : 2=46.40% 00:20:47.686 cpu : usr=1.30%, sys=3.00%, ctx=1028, majf=0, minf=1 00:20:47.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 issued rwts: total=512,516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:47.686 job2: (groupid=0, jobs=1): err= 0: pid=988305: Fri Jul 26 13:31:45 2024 00:20:47.686 read: IOPS=14, BW=57.7KiB/s (59.1kB/s)(60.0KiB/1039msec) 00:20:47.686 slat (nsec): min=25448, max=25959, avg=25686.33, stdev=139.51 00:20:47.686 clat (usec): min=41640, max=42041, avg=41948.95, stdev=94.75 00:20:47.686 lat (usec): min=41666, max=42067, avg=41974.63, stdev=94.77 00:20:47.686 clat percentiles (usec): 00:20:47.686 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:47.686 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:47.686 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:47.686 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:47.686 | 99.99th=[42206] 00:20:47.686 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:20:47.686 slat (nsec): min=9778, max=67027, avg=28088.08, stdev=9082.27 00:20:47.686 clat (usec): min=520, max=895, avg=764.04, stdev=57.68 00:20:47.686 lat (usec): min=531, max=914, avg=792.13, stdev=60.51 00:20:47.686 clat percentiles (usec): 00:20:47.686 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 717], 00:20:47.686 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 783], 00:20:47.686 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 840], 00:20:47.686 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 898], 99.95th=[ 898], 00:20:47.686 | 99.99th=[ 898] 00:20:47.686 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:47.686 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:47.686 lat (usec) : 750=26.76%, 1000=70.40% 00:20:47.686 lat (msec) : 50=2.85% 00:20:47.686 cpu : usr=0.58%, sys=1.54%, ctx=527, majf=0, minf=1 00:20:47.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.686 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:20:47.686 job3: (groupid=0, jobs=1): err= 0: pid=988306: Fri Jul 26 13:31:45 2024 00:20:47.686 read: IOPS=14, BW=57.8KiB/s (59.2kB/s)(60.0KiB/1038msec) 00:20:47.686 slat (nsec): min=25165, max=25720, avg=25410.60, stdev=147.89 00:20:47.686 clat (usec): min=41446, max=42065, avg=41937.42, stdev=142.88 00:20:47.686 lat (usec): min=41471, max=42090, avg=41962.83, stdev=142.89 00:20:47.686 clat percentiles (usec): 00:20:47.686 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:47.686 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:47.686 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:47.686 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:47.686 | 99.99th=[42206] 00:20:47.686 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:20:47.686 slat (nsec): min=9691, max=76119, avg=28097.28, stdev=8816.96 00:20:47.686 clat (usec): min=470, max=932, avg=762.40, stdev=63.35 00:20:47.686 lat (usec): min=481, max=945, avg=790.49, stdev=64.99 00:20:47.686 clat percentiles (usec): 00:20:47.686 | 1.00th=[ 578], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 709], 00:20:47.686 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:20:47.686 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 857], 00:20:47.686 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 930], 99.95th=[ 930], 00:20:47.686 | 99.99th=[ 930] 00:20:47.686 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:20:47.686 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:47.686 lat (usec) : 500=0.19%, 750=31.12%, 1000=65.84% 00:20:47.686 lat (msec) : 50=2.85% 00:20:47.686 cpu : usr=0.77%, sys=1.25%, ctx=528, majf=0, minf=1 00:20:47.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.686 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:47.686 00:20:47.686 Run status group 0 (all jobs): 00:20:47.686 READ: bw=2133KiB/s (2184kB/s), 47.2KiB/s-2046KiB/s (48.4kB/s-2095kB/s), io=2216KiB (2269kB), run=1001-1039msec 00:20:47.686 WRITE: bw=7900KiB/s (8090kB/s), 1971KiB/s-2062KiB/s (2018kB/s-2111kB/s), io=8208KiB (8405kB), run=1001-1039msec 00:20:47.686 00:20:47.686 Disk stats (read/write): 00:20:47.686 nvme0n1: ios=39/512, merge=0/0, ticks=1226/491, in_queue=1717, util=99.50% 00:20:47.686 nvme0n2: ios=424/512, merge=0/0, ticks=484/382, in_queue=866, util=90.31% 00:20:47.686 nvme0n3: ios=62/512, merge=0/0, ticks=753/375, in_queue=1128, util=96.73% 00:20:47.686 nvme0n4: ios=10/512, merge=0/0, ticks=419/387, in_queue=806, util=89.43% 00:20:47.947 13:31:45 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:47.947 [global] 00:20:47.947 thread=1 00:20:47.947 invalidate=1 00:20:47.947 rw=write 00:20:47.947 time_based=1 00:20:47.947 runtime=1 00:20:47.947 ioengine=libaio 00:20:47.947 direct=1 00:20:47.947 bs=4096 00:20:47.947 iodepth=128 00:20:47.947 norandommap=0 00:20:47.947 numjobs=1 00:20:47.947 00:20:47.947 verify_dump=1 00:20:47.947 verify_backlog=512 00:20:47.947 verify_state_save=0 00:20:47.947 do_verify=1 00:20:47.947 verify=crc32c-intel 
00:20:47.947 [job0] 00:20:47.947 filename=/dev/nvme0n1 00:20:47.947 [job1] 00:20:47.947 filename=/dev/nvme0n2 00:20:47.947 [job2] 00:20:47.947 filename=/dev/nvme0n3 00:20:47.947 [job3] 00:20:47.947 filename=/dev/nvme0n4 00:20:47.947 Could not set queue depth (nvme0n1) 00:20:47.947 Could not set queue depth (nvme0n2) 00:20:47.948 Could not set queue depth (nvme0n3) 00:20:47.948 Could not set queue depth (nvme0n4) 00:20:48.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.209 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.209 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.209 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:48.209 fio-3.35 00:20:48.209 Starting 4 threads 00:20:49.594 00:20:49.594 job0: (groupid=0, jobs=1): err= 0: pid=988832: Fri Jul 26 13:31:46 2024 00:20:49.594 read: IOPS=3303, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1006msec) 00:20:49.594 slat (nsec): min=905, max=12327k, avg=172534.78, stdev=948262.48 00:20:49.594 clat (usec): min=4563, max=47518, avg=21769.33, stdev=9623.22 00:20:49.594 lat (usec): min=7303, max=47545, avg=21941.87, stdev=9717.71 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 7570], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12125], 00:20:49.595 | 30.00th=[13566], 40.00th=[15926], 50.00th=[19006], 60.00th=[24773], 00:20:49.595 | 70.00th=[27919], 80.00th=[31589], 90.00th=[35914], 95.00th=[38536], 00:20:49.595 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[45351], 00:20:49.595 | 99.99th=[47449] 00:20:49.595 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:20:49.595 slat (nsec): min=1603, max=9685.6k, avg=113590.24, stdev=678822.09 00:20:49.595 clat (usec): min=6426, max=41058, avg=15165.31, stdev=5790.10 00:20:49.595 lat (usec): min=6429, max=41081, avg=15278.90, stdev=5832.11 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10945], 00:20:49.595 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13698], 60.00th=[14746], 00:20:49.595 | 70.00th=[15664], 80.00th=[19006], 90.00th=[23987], 95.00th=[28181], 00:20:49.595 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35390], 99.95th=[37487], 00:20:49.595 | 99.99th=[41157] 00:20:49.595 bw ( KiB/s): min=12263, max=16384, per=16.23%, avg=14323.50, stdev=2913.99, samples=2 00:20:49.595 iops : min= 3065, max= 4096, avg=3580.50, stdev=729.03, samples=2 00:20:49.595 lat (msec) : 10=8.41%, 20=60.33%, 50=31.26% 00:20:49.595 cpu : usr=1.99%, sys=3.48%, ctx=308, majf=0, minf=1 00:20:49.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:49.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.595 issued rwts: total=3323,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.595 job1: (groupid=0, jobs=1): err= 0: pid=988835: Fri Jul 26 13:31:46 2024 00:20:49.595 read: IOPS=5466, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1006msec) 00:20:49.595 slat (nsec): min=895, max=43885k, avg=91151.06, stdev=817930.06 00:20:49.595 clat (usec): min=1612, max=55351, avg=11612.60, stdev=6239.18 00:20:49.595 lat (usec): min=2124, max=55366, avg=11703.75, stdev=6282.07 
00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 3261], 5.00th=[ 4752], 10.00th=[ 6587], 20.00th=[ 8717], 00:20:49.595 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[11731], 00:20:49.595 | 70.00th=[12780], 80.00th=[13960], 90.00th=[15926], 95.00th=[17695], 00:20:49.595 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:20:49.595 | 99.99th=[55313] 00:20:49.595 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:20:49.595 slat (nsec): min=1546, max=25817k, avg=83194.27, stdev=544105.31 00:20:49.595 clat (usec): min=1139, max=52414, avg=10673.87, stdev=5709.20 00:20:49.595 lat (usec): min=1160, max=52422, avg=10757.06, stdev=5744.98 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 1926], 5.00th=[ 4686], 10.00th=[ 6718], 20.00th=[ 7832], 00:20:49.595 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10421], 00:20:49.595 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14877], 95.00th=[19268], 00:20:49.595 | 99.00th=[35914], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:20:49.595 | 99.99th=[52167] 00:20:49.595 bw ( KiB/s): min=21216, max=23840, per=25.52%, avg=22528.00, stdev=1855.45, samples=2 00:20:49.595 iops : min= 5304, max= 5960, avg=5632.00, stdev=463.86, samples=2 00:20:49.595 lat (msec) : 2=0.71%, 4=2.02%, 10=48.67%, 20=45.03%, 50=2.87% 00:20:49.595 lat (msec) : 100=0.71% 00:20:49.595 cpu : usr=1.89%, sys=3.38%, ctx=665, majf=0, minf=1 00:20:49.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:49.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.595 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.595 job2: (groupid=0, jobs=1): err= 0: pid=988836: Fri Jul 26 13:31:46 2024 00:20:49.595 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:20:49.595 slat (nsec): min=917, max=13670k, avg=74298.71, stdev=505992.62 00:20:49.595 clat (usec): min=2383, max=23091, avg=9929.07, stdev=2861.96 00:20:49.595 lat (usec): min=2431, max=24446, avg=10003.37, stdev=2875.49 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 3589], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7767], 00:20:49.595 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10421], 00:20:49.595 | 70.00th=[11207], 80.00th=[11994], 90.00th=[13173], 95.00th=[15533], 00:20:49.595 | 99.00th=[18220], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:20:49.595 | 99.99th=[23200] 00:20:49.595 write: IOPS=6857, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1005msec); 0 zone resets 00:20:49.595 slat (nsec): min=1589, max=6445.5k, avg=61985.89, stdev=379378.89 00:20:49.595 clat (usec): min=1242, max=44756, avg=8883.82, stdev=4739.61 00:20:49.595 lat (usec): min=1260, max=44758, avg=8945.81, stdev=4751.73 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 2311], 5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 5997], 00:20:49.595 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8717], 00:20:49.595 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[13435], 95.00th=[15926], 00:20:49.595 | 99.00th=[31327], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:20:49.595 | 99.99th=[44827] 00:20:49.595 bw ( KiB/s): min=23200, max=30920, per=30.65%, avg=27060.00, stdev=5458.86, samples=2 00:20:49.595 iops : min= 5800, max= 7730, avg=6765.00, stdev=1364.72, samples=2 
00:20:49.595 lat (msec) : 2=0.43%, 4=2.59%, 10=62.98%, 20=32.79%, 50=1.21% 00:20:49.595 cpu : usr=3.09%, sys=3.69%, ctx=690, majf=0, minf=1 00:20:49.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:49.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.595 issued rwts: total=6656,6892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.595 job3: (groupid=0, jobs=1): err= 0: pid=988837: Fri Jul 26 13:31:46 2024 00:20:49.595 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:20:49.595 slat (nsec): min=884, max=12128k, avg=82559.53, stdev=511727.83 00:20:49.595 clat (usec): min=6589, max=23046, avg=11187.59, stdev=2537.57 00:20:49.595 lat (usec): min=6593, max=25778, avg=11270.15, stdev=2582.89 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9372], 00:20:49.595 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:20:49.595 | 70.00th=[11600], 80.00th=[12387], 90.00th=[14746], 95.00th=[16450], 00:20:49.595 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22676], 99.95th=[22938], 00:20:49.595 | 99.99th=[22938] 00:20:49.595 write: IOPS=6080, BW=23.8MiB/s (24.9MB/s)(23.8MiB/1002msec); 0 zone resets 00:20:49.595 slat (nsec): min=1534, max=7926.9k, avg=82646.99, stdev=455276.14 00:20:49.595 clat (usec): min=990, max=21895, avg=10528.29, stdev=2990.81 00:20:49.595 lat (usec): min=1238, max=22711, avg=10610.94, stdev=3024.28 00:20:49.595 clat percentiles (usec): 00:20:49.595 | 1.00th=[ 3720], 5.00th=[ 5800], 10.00th=[ 6718], 20.00th=[ 8356], 00:20:49.595 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10683], 60.00th=[11076], 00:20:49.595 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13698], 95.00th=[16581], 00:20:49.595 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21365], 99.95th=[21365], 00:20:49.595 | 99.99th=[21890] 00:20:49.595 bw ( KiB/s): min=23152, max=24576, per=27.03%, avg=23864.00, stdev=1006.92, samples=2 00:20:49.595 iops : min= 5788, max= 6144, avg=5966.00, stdev=251.73, samples=2 00:20:49.595 lat (usec) : 1000=0.01% 00:20:49.595 lat (msec) : 2=0.04%, 4=0.73%, 10=35.82%, 20=62.69%, 50=0.71% 00:20:49.595 cpu : usr=2.90%, sys=3.30%, ctx=699, majf=0, minf=1 00:20:49.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:49.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:49.595 issued rwts: total=5632,6093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:49.595 00:20:49.595 Run status group 0 (all jobs): 00:20:49.595 READ: bw=82.0MiB/s (85.9MB/s), 12.9MiB/s-25.9MiB/s (13.5MB/s-27.1MB/s), io=82.5MiB (86.5MB), run=1002-1006msec 00:20:49.595 WRITE: bw=86.2MiB/s (90.4MB/s), 13.9MiB/s-26.8MiB/s (14.6MB/s-28.1MB/s), io=86.7MiB (90.9MB), run=1002-1006msec 00:20:49.595 00:20:49.595 Disk stats (read/write): 00:20:49.595 nvme0n1: ios=2322/2560, merge=0/0, ticks=19905/12931, in_queue=32836, util=90.38% 00:20:49.595 nvme0n2: ios=4126/4479, merge=0/0, ticks=24387/22176, in_queue=46563, util=96.78% 00:20:49.595 nvme0n3: ios=5522/5632, merge=0/0, ticks=47570/42930, in_queue=90500, util=99.00% 00:20:49.596 nvme0n4: ios=4320/4608, merge=0/0, ticks=24047/26676, in_queue=50723, util=91.40% 00:20:49.596 
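(Reader aside, not part of the captured console output: the [global]/[job0]-[job3] file that fio-wrapper echoed above for this run — rw=write, bs=4096, iodepth=128, libaio, direct I/O, crc32c-intel verify against /dev/nvme0n1 through /dev/nvme0n4 — maps roughly onto a single plain fio invocation like the sketch below. This is illustrative only, assembled from the job-file lines already shown in the log; it is not the wrapper's exact command line, and the wrapper may assemble the job differently.)

    # sketch: global options first, then one --name/--filename pair per namespace,
    # mirroring the job file printed above
    fio --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --rw=write --bs=4096 --iodepth=128 --numjobs=1 \
        --time_based --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4

(In fio's CLI, options given before the first --name are global and each --name opens a new job bound to its --filename, which is why the per-job read/write, slat/clat and percentile blocks above appear once per namespace.)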
13:31:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:49.596 [global] 00:20:49.596 thread=1 00:20:49.596 invalidate=1 00:20:49.596 rw=randwrite 00:20:49.596 time_based=1 00:20:49.596 runtime=1 00:20:49.596 ioengine=libaio 00:20:49.596 direct=1 00:20:49.596 bs=4096 00:20:49.596 iodepth=128 00:20:49.596 norandommap=0 00:20:49.596 numjobs=1 00:20:49.596 00:20:49.596 verify_dump=1 00:20:49.596 verify_backlog=512 00:20:49.596 verify_state_save=0 00:20:49.596 do_verify=1 00:20:49.596 verify=crc32c-intel 00:20:49.596 [job0] 00:20:49.596 filename=/dev/nvme0n1 00:20:49.596 [job1] 00:20:49.596 filename=/dev/nvme0n2 00:20:49.596 [job2] 00:20:49.596 filename=/dev/nvme0n3 00:20:49.596 [job3] 00:20:49.596 filename=/dev/nvme0n4 00:20:49.596 Could not set queue depth (nvme0n1) 00:20:49.596 Could not set queue depth (nvme0n2) 00:20:49.596 Could not set queue depth (nvme0n3) 00:20:49.596 Could not set queue depth (nvme0n4) 00:20:49.856 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.856 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.856 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.856 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:49.856 fio-3.35 00:20:49.856 Starting 4 threads 00:20:51.240 00:20:51.240 job0: (groupid=0, jobs=1): err= 0: pid=989357: Fri Jul 26 13:31:48 2024 00:20:51.240 read: IOPS=8410, BW=32.9MiB/s (34.5MB/s)(33.0MiB/1003msec) 00:20:51.240 slat (nsec): min=904, max=9397.2k, avg=57208.39, stdev=393673.56 00:20:51.240 clat (usec): min=1301, max=19917, avg=7379.92, stdev=1924.19 00:20:51.240 lat (usec): min=2532, max=19919, avg=7437.13, stdev=1939.16 00:20:51.240 clat percentiles (usec): 00:20:51.240 | 1.00th=[ 4359], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 6128], 00:20:51.241 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7242], 00:20:51.241 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[10028], 95.00th=[10814], 00:20:51.241 | 99.00th=[13698], 99.50th=[15008], 99.90th=[18744], 99.95th=[19792], 00:20:51.241 | 99.99th=[19792] 00:20:51.241 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:20:51.241 slat (nsec): min=1560, max=5617.7k, avg=56338.35, stdev=296019.68 00:20:51.241 clat (usec): min=1155, max=19912, avg=7441.33, stdev=2976.38 00:20:51.241 lat (usec): min=1165, max=19915, avg=7497.67, stdev=2989.10 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 2737], 5.00th=[ 3621], 10.00th=[ 4359], 20.00th=[ 5669], 00:20:51.241 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6980], 00:20:51.241 | 70.00th=[ 7701], 80.00th=[ 8979], 90.00th=[12125], 95.00th=[14484], 00:20:51.241 | 99.00th=[16450], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:20:51.241 | 99.99th=[19792] 00:20:51.241 bw ( KiB/s): min=32464, max=37168, per=36.00%, avg=34816.00, stdev=3326.23, samples=2 00:20:51.241 iops : min= 8116, max= 9292, avg=8704.00, stdev=831.56, samples=2 00:20:51.241 lat (msec) : 2=0.08%, 4=4.18%, 10=82.71%, 20=13.04% 00:20:51.241 cpu : usr=3.79%, sys=5.39%, ctx=994, majf=0, minf=1 00:20:51.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:51.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:51.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.241 issued rwts: total=8436,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.241 job1: (groupid=0, jobs=1): err= 0: pid=989358: Fri Jul 26 13:31:48 2024 00:20:51.241 read: IOPS=2381, BW=9524KiB/s (9753kB/s)(9572KiB/1005msec) 00:20:51.241 slat (nsec): min=888, max=17704k, avg=194306.72, stdev=1242885.82 00:20:51.241 clat (usec): min=3173, max=56973, avg=23655.05, stdev=10957.69 00:20:51.241 lat (usec): min=7098, max=56996, avg=23849.36, stdev=11068.72 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[10945], 20.00th=[14222], 00:20:51.241 | 30.00th=[16581], 40.00th=[19268], 50.00th=[19792], 60.00th=[23200], 00:20:51.241 | 70.00th=[30016], 80.00th=[33424], 90.00th=[40633], 95.00th=[45876], 00:20:51.241 | 99.00th=[50594], 99.50th=[50594], 99.90th=[55313], 99.95th=[56361], 00:20:51.241 | 99.99th=[56886] 00:20:51.241 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:20:51.241 slat (nsec): min=1499, max=18719k, avg=203868.80, stdev=1131540.77 00:20:51.241 clat (usec): min=4363, max=70265, avg=27481.74, stdev=16594.59 00:20:51.241 lat (usec): min=4370, max=70272, avg=27685.61, stdev=16713.80 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[13042], 00:20:51.241 | 30.00th=[17433], 40.00th=[20055], 50.00th=[23987], 60.00th=[26084], 00:20:51.241 | 70.00th=[29754], 80.00th=[40109], 90.00th=[56886], 95.00th=[63701], 00:20:51.241 | 99.00th=[67634], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:20:51.241 | 99.99th=[70779] 00:20:51.241 bw ( KiB/s): min= 8504, max=11976, per=10.59%, avg=10240.00, stdev=2455.07, samples=2 00:20:51.241 iops : min= 2126, max= 2994, avg=2560.00, stdev=613.77, samples=2 00:20:51.241 lat (msec) : 4=0.02%, 10=5.69%, 20=39.67%, 50=45.89%, 100=8.72% 00:20:51.241 cpu : usr=1.29%, sys=2.99%, ctx=242, majf=0, minf=1 00:20:51.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:20:51.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.241 issued rwts: total=2393,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.241 job2: (groupid=0, jobs=1): err= 0: pid=989367: Fri Jul 26 13:31:48 2024 00:20:51.241 read: IOPS=7891, BW=30.8MiB/s (32.3MB/s)(30.9MiB/1004msec) 00:20:51.241 slat (nsec): min=925, max=7739.6k, avg=63358.22, stdev=432655.93 00:20:51.241 clat (usec): min=3154, max=16200, avg=8238.40, stdev=1957.92 00:20:51.241 lat (usec): min=3712, max=16201, avg=8301.76, stdev=1976.37 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6783], 00:20:51.241 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8225], 00:20:51.241 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[12518], 00:20:51.241 | 99.00th=[13829], 99.50th=[14222], 99.90th=[15795], 99.95th=[16057], 00:20:51.241 | 99.99th=[16188] 00:20:51.241 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:20:51.241 slat (nsec): min=1537, max=6403.5k, avg=57890.92, stdev=314888.57 00:20:51.241 clat (usec): min=1633, max=18384, avg=7575.72, stdev=2467.40 00:20:51.241 lat (usec): min=1642, max=18386, avg=7633.61, 
stdev=2469.82 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 2999], 5.00th=[ 4015], 10.00th=[ 4686], 20.00th=[ 5866], 00:20:51.241 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7635], 00:20:51.241 | 70.00th=[ 7832], 80.00th=[ 8586], 90.00th=[10814], 95.00th=[12649], 00:20:51.241 | 99.00th=[15664], 99.50th=[16188], 99.90th=[17695], 99.95th=[17695], 00:20:51.241 | 99.99th=[18482] 00:20:51.241 bw ( KiB/s): min=32760, max=32776, per=33.88%, avg=32768.00, stdev=11.31, samples=2 00:20:51.241 iops : min= 8190, max= 8194, avg=8192.00, stdev= 2.83, samples=2 00:20:51.241 lat (msec) : 2=0.07%, 4=2.54%, 10=81.80%, 20=15.59% 00:20:51.241 cpu : usr=1.89%, sys=6.38%, ctx=943, majf=0, minf=1 00:20:51.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:51.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.241 issued rwts: total=7923,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.241 job3: (groupid=0, jobs=1): err= 0: pid=989368: Fri Jul 26 13:31:48 2024 00:20:51.241 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:20:51.241 slat (nsec): min=974, max=11253k, avg=105287.51, stdev=737158.86 00:20:51.241 clat (usec): min=6897, max=30087, avg=14032.97, stdev=3644.27 00:20:51.241 lat (usec): min=8480, max=30113, avg=14138.26, stdev=3665.67 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10814], 00:20:51.241 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13173], 60.00th=[14222], 00:20:51.241 | 70.00th=[15401], 80.00th=[16581], 90.00th=[18482], 95.00th=[21103], 00:20:51.241 | 99.00th=[26608], 99.50th=[27657], 99.90th=[29754], 99.95th=[29754], 00:20:51.241 | 99.99th=[30016] 00:20:51.241 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.3MiB/1009msec); 0 zone resets 00:20:51.241 slat (nsec): min=1689, max=10614k, avg=99009.05, stdev=601851.27 00:20:51.241 clat (usec): min=1186, max=33717, avg=12783.46, stdev=4077.72 00:20:51.241 lat (usec): min=1194, max=33721, avg=12882.46, stdev=4072.95 00:20:51.241 clat percentiles (usec): 00:20:51.241 | 1.00th=[ 5604], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[10290], 00:20:51.241 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11863], 60.00th=[12649], 00:20:51.241 | 70.00th=[13829], 80.00th=[15139], 90.00th=[17433], 95.00th=[21103], 00:20:51.241 | 99.00th=[25822], 99.50th=[27657], 99.90th=[33817], 99.95th=[33817], 00:20:51.241 | 99.99th=[33817] 00:20:51.241 bw ( KiB/s): min=19016, max=19504, per=19.91%, avg=19260.00, stdev=345.07, samples=2 00:20:51.241 iops : min= 4754, max= 4876, avg=4815.00, stdev=86.27, samples=2 00:20:51.241 lat (msec) : 2=0.10%, 4=0.14%, 10=10.69%, 20=82.77%, 50=6.29% 00:20:51.241 cpu : usr=2.68%, sys=5.95%, ctx=406, majf=0, minf=1 00:20:51.241 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:51.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:51.241 issued rwts: total=4608,4942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.241 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:51.241 00:20:51.241 Run status group 0 (all jobs): 00:20:51.241 READ: bw=90.4MiB/s (94.8MB/s), 9524KiB/s-32.9MiB/s (9753kB/s-34.5MB/s), io=91.2MiB (95.7MB), run=1003-1009msec 00:20:51.241 WRITE: 
bw=94.5MiB/s (99.0MB/s), 9.95MiB/s-33.9MiB/s (10.4MB/s-35.5MB/s), io=95.3MiB (99.9MB), run=1003-1009msec 00:20:51.241 00:20:51.241 Disk stats (read/write): 00:20:51.241 nvme0n1: ios=6662/6669, merge=0/0, ticks=47701/47998, in_queue=95699, util=90.28% 00:20:51.241 nvme0n2: ios=1586/2048, merge=0/0, ticks=21155/26793, in_queue=47948, util=85.05% 00:20:51.241 nvme0n3: ios=6049/6144, merge=0/0, ticks=48900/46752, in_queue=95652, util=95.98% 00:20:51.241 nvme0n4: ios=3618/3687, merge=0/0, ticks=51802/42694, in_queue=94496, util=95.52% 00:20:51.241 13:31:48 -- target/fio.sh@55 -- # sync 00:20:51.241 13:31:48 -- target/fio.sh@59 -- # fio_pid=989706 00:20:51.241 13:31:48 -- target/fio.sh@61 -- # sleep 3 00:20:51.241 13:31:48 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:51.241 [global] 00:20:51.241 thread=1 00:20:51.241 invalidate=1 00:20:51.241 rw=read 00:20:51.241 time_based=1 00:20:51.241 runtime=10 00:20:51.241 ioengine=libaio 00:20:51.241 direct=1 00:20:51.241 bs=4096 00:20:51.241 iodepth=1 00:20:51.241 norandommap=1 00:20:51.241 numjobs=1 00:20:51.241 00:20:51.241 [job0] 00:20:51.241 filename=/dev/nvme0n1 00:20:51.241 [job1] 00:20:51.241 filename=/dev/nvme0n2 00:20:51.241 [job2] 00:20:51.241 filename=/dev/nvme0n3 00:20:51.241 [job3] 00:20:51.241 filename=/dev/nvme0n4 00:20:51.531 Could not set queue depth (nvme0n1) 00:20:51.531 Could not set queue depth (nvme0n2) 00:20:51.531 Could not set queue depth (nvme0n3) 00:20:51.531 Could not set queue depth (nvme0n4) 00:20:51.791 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.791 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.791 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.791 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.791 fio-3.35 00:20:51.791 Starting 4 threads 00:20:54.336 13:31:51 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:54.336 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1040384, buflen=4096 00:20:54.336 fio: pid=989893, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:54.336 13:31:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:54.596 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8052736, buflen=4096 00:20:54.596 fio: pid=989892, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:54.596 13:31:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:54.596 13:31:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:54.855 13:31:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:54.856 13:31:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:54.856 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=323584, buflen=4096 00:20:54.856 fio: pid=989890, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:55.116 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read 
offset=11182080, buflen=4096 00:20:55.116 fio: pid=989891, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:55.116 13:31:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.116 13:31:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:55.116 00:20:55.116 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=989890: Fri Jul 26 13:31:52 2024 00:20:55.116 read: IOPS=27, BW=108KiB/s (110kB/s)(316KiB/2939msec) 00:20:55.116 slat (usec): min=7, max=8464, avg=130.06, stdev=943.61 00:20:55.116 clat (usec): min=665, max=42333, avg=36789.46, stdev=13679.13 00:20:55.116 lat (usec): min=677, max=50038, avg=36920.85, stdev=13752.41 00:20:55.116 clat percentiles (usec): 00:20:55.116 | 1.00th=[ 668], 5.00th=[ 848], 10.00th=[ 1221], 20.00th=[41681], 00:20:55.116 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:55.116 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:55.116 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:55.116 | 99.99th=[42206] 00:20:55.116 bw ( KiB/s): min= 96, max= 160, per=1.68%, avg=108.80, stdev=28.62, samples=5 00:20:55.116 iops : min= 24, max= 40, avg=27.20, stdev= 7.16, samples=5 00:20:55.116 lat (usec) : 750=2.50%, 1000=3.75% 00:20:55.116 lat (msec) : 2=5.00%, 4=1.25%, 50=86.25% 00:20:55.116 cpu : usr=0.17%, sys=0.00%, ctx=81, majf=0, minf=1 00:20:55.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.116 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=989891: Fri Jul 26 13:31:52 2024 00:20:55.116 read: IOPS=870, BW=3482KiB/s (3566kB/s)(10.7MiB/3136msec) 00:20:55.116 slat (usec): min=6, max=10513, avg=38.57, stdev=336.96 00:20:55.116 clat (usec): min=592, max=30968, avg=1095.61, stdev=592.26 00:20:55.116 lat (usec): min=618, max=30993, avg=1134.18, stdev=681.73 00:20:55.116 clat percentiles (usec): 00:20:55.116 | 1.00th=[ 742], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 963], 00:20:55.116 | 30.00th=[ 1020], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:20:55.116 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1270], 00:20:55.116 | 99.00th=[ 1500], 99.50th=[ 1549], 99.90th=[ 2900], 99.95th=[ 4228], 00:20:55.116 | 99.99th=[31065] 00:20:55.116 bw ( KiB/s): min= 3268, max= 3672, per=54.86%, avg=3519.33, stdev=179.71, samples=6 00:20:55.116 iops : min= 817, max= 918, avg=879.83, stdev=44.93, samples=6 00:20:55.116 lat (usec) : 750=1.10%, 1000=25.92% 00:20:55.116 lat (msec) : 2=72.79%, 4=0.07%, 10=0.04%, 50=0.04% 00:20:55.116 cpu : usr=1.63%, sys=3.38%, ctx=2736, majf=0, minf=1 00:20:55.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 issued rwts: total=2731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.116 job2: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=989892: Fri Jul 26 13:31:52 2024 00:20:55.116 read: IOPS=713, BW=2854KiB/s (2923kB/s)(7864KiB/2755msec) 00:20:55.116 slat (nsec): min=25826, max=71685, avg=26979.30, stdev=3745.29 00:20:55.116 clat (usec): min=696, max=3803, avg=1357.04, stdev=96.84 00:20:55.116 lat (usec): min=728, max=3829, avg=1384.02, stdev=96.67 00:20:55.116 clat percentiles (usec): 00:20:55.116 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1270], 20.00th=[ 1319], 00:20:55.116 | 30.00th=[ 1336], 40.00th=[ 1352], 50.00th=[ 1369], 60.00th=[ 1369], 00:20:55.116 | 70.00th=[ 1385], 80.00th=[ 1401], 90.00th=[ 1434], 95.00th=[ 1450], 00:20:55.116 | 99.00th=[ 1516], 99.50th=[ 1516], 99.90th=[ 2606], 99.95th=[ 3818], 00:20:55.116 | 99.99th=[ 3818] 00:20:55.116 bw ( KiB/s): min= 2856, max= 2896, per=44.87%, avg=2878.40, stdev=20.71, samples=5 00:20:55.116 iops : min= 714, max= 724, avg=719.60, stdev= 5.18, samples=5 00:20:55.116 lat (usec) : 750=0.10% 00:20:55.116 lat (msec) : 2=99.69%, 4=0.15% 00:20:55.116 cpu : usr=1.27%, sys=2.90%, ctx=1969, majf=0, minf=1 00:20:55.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.116 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.116 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=989893: Fri Jul 26 13:31:52 2024 00:20:55.116 read: IOPS=98, BW=392KiB/s (401kB/s)(1016KiB/2595msec) 00:20:55.116 slat (nsec): min=26185, max=63397, avg=27427.01, stdev=3751.46 00:20:55.116 clat (usec): min=1117, max=45008, avg=10079.68, stdev=16658.51 00:20:55.116 lat (usec): min=1144, max=45039, avg=10107.11, stdev=16658.21 00:20:55.116 clat percentiles (usec): 00:20:55.116 | 1.00th=[ 1156], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1385], 00:20:55.116 | 30.00th=[ 1434], 40.00th=[ 1467], 50.00th=[ 1483], 60.00th=[ 1532], 00:20:55.116 | 70.00th=[ 1582], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:55.116 | 99.00th=[42730], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:20:55.116 | 99.99th=[44827] 00:20:55.116 bw ( KiB/s): min= 96, max= 1288, per=5.21%, avg=334.40, stdev=533.08, samples=5 00:20:55.116 iops : min= 24, max= 322, avg=83.60, stdev=133.27, samples=5 00:20:55.116 lat (msec) : 2=78.43%, 50=21.18% 00:20:55.116 cpu : usr=0.12%, sys=0.46%, ctx=255, majf=0, minf=2 00:20:55.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.117 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.117 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:55.117 00:20:55.117 Run status group 0 (all jobs): 00:20:55.117 READ: bw=6415KiB/s (6568kB/s), 108KiB/s-3482KiB/s (110kB/s-3566kB/s), io=19.6MiB (20.6MB), run=2595-3136msec 00:20:55.117 00:20:55.117 Disk stats (read/write): 00:20:55.117 nvme0n1: ios=77/0, merge=0/0, ticks=2824/0, in_queue=2824, util=94.56% 00:20:55.117 nvme0n2: ios=2709/0, merge=0/0, ticks=2684/0, in_queue=2684, util=94.70% 00:20:55.117 nvme0n3: ios=1861/0, merge=0/0, ticks=2313/0, in_queue=2313, util=96.03% 00:20:55.117 
nvme0n4: ios=255/0, merge=0/0, ticks=2541/0, in_queue=2541, util=96.24% 00:20:55.117 13:31:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.117 13:31:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:55.376 13:31:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.376 13:31:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:55.376 13:31:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.376 13:31:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:55.636 13:31:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:55.636 13:31:53 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:55.900 13:31:53 -- target/fio.sh@69 -- # fio_status=0 00:20:55.900 13:31:53 -- target/fio.sh@70 -- # wait 989706 00:20:55.900 13:31:53 -- target/fio.sh@70 -- # fio_status=4 00:20:55.900 13:31:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:55.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.900 13:31:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:55.900 13:31:53 -- common/autotest_common.sh@1198 -- # local i=0 00:20:55.900 13:31:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.900 13:31:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:55.900 13:31:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:55.900 13:31:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:55.900 13:31:53 -- common/autotest_common.sh@1210 -- # return 0 00:20:55.900 13:31:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:55.900 13:31:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:55.900 nvmf hotplug test: fio failed as expected 00:20:55.900 13:31:53 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.160 13:31:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:56.160 13:31:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:56.160 13:31:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:56.160 13:31:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:56.160 13:31:53 -- target/fio.sh@91 -- # nvmftestfini 00:20:56.160 13:31:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:56.160 13:31:53 -- nvmf/common.sh@116 -- # sync 00:20:56.160 13:31:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:56.160 13:31:53 -- nvmf/common.sh@119 -- # set +e 00:20:56.160 13:31:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:56.160 13:31:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:56.160 rmmod nvme_tcp 00:20:56.160 rmmod nvme_fabrics 00:20:56.160 rmmod nvme_keyring 00:20:56.160 13:31:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:56.160 13:31:53 -- nvmf/common.sh@123 -- # set -e 00:20:56.160 13:31:53 -- nvmf/common.sh@124 -- # return 0 00:20:56.160 13:31:53 -- nvmf/common.sh@477 -- # '[' -n 986143 ']' 00:20:56.160 13:31:53 -- 
nvmf/common.sh@478 -- # killprocess 986143 00:20:56.160 13:31:53 -- common/autotest_common.sh@926 -- # '[' -z 986143 ']' 00:20:56.160 13:31:53 -- common/autotest_common.sh@930 -- # kill -0 986143 00:20:56.160 13:31:53 -- common/autotest_common.sh@931 -- # uname 00:20:56.160 13:31:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:56.160 13:31:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 986143 00:20:56.160 13:31:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:56.160 13:31:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:56.160 13:31:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 986143' 00:20:56.160 killing process with pid 986143 00:20:56.160 13:31:53 -- common/autotest_common.sh@945 -- # kill 986143 00:20:56.160 13:31:53 -- common/autotest_common.sh@950 -- # wait 986143 00:20:56.419 13:31:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:56.419 13:31:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:56.419 13:31:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:56.419 13:31:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.419 13:31:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:56.419 13:31:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.419 13:31:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.419 13:31:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.334 13:31:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:58.334 00:20:58.334 real 0m28.321s 00:20:58.334 user 2m37.234s 00:20:58.334 sys 0m8.953s 00:20:58.334 13:31:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.334 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:20:58.334 ************************************ 00:20:58.334 END TEST nvmf_fio_target 00:20:58.334 ************************************ 00:20:58.596 13:31:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:58.596 13:31:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:58.596 13:31:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:58.596 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:20:58.596 ************************************ 00:20:58.596 START TEST nvmf_bdevio 00:20:58.596 ************************************ 00:20:58.596 13:31:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:58.596 * Looking for test storage... 
00:20:58.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.597 13:31:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.597 13:31:55 -- nvmf/common.sh@7 -- # uname -s 00:20:58.597 13:31:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.597 13:31:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.597 13:31:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.597 13:31:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.597 13:31:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.597 13:31:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.597 13:31:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.597 13:31:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.597 13:31:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.597 13:31:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.597 13:31:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.597 13:31:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.597 13:31:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.597 13:31:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.597 13:31:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.597 13:31:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.597 13:31:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.597 13:31:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.597 13:31:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.597 13:31:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.597 13:31:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.597 13:31:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.597 13:31:55 -- paths/export.sh@5 -- # export PATH 00:20:58.597 13:31:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.597 13:31:55 -- nvmf/common.sh@46 -- # : 0 00:20:58.597 13:31:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:58.597 13:31:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:58.597 13:31:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:58.597 13:31:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.597 13:31:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.597 13:31:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:58.597 13:31:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:58.597 13:31:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:58.597 13:31:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.597 13:31:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.597 13:31:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:58.597 13:31:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:58.597 13:31:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.597 13:31:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:58.597 13:31:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:58.597 13:31:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:58.597 13:31:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.597 13:31:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.597 13:31:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.597 13:31:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:58.597 13:31:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:58.597 13:31:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:58.597 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:21:06.746 13:32:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:06.746 13:32:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:06.746 13:32:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:06.746 13:32:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:06.746 13:32:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:06.746 13:32:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:06.746 13:32:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:06.746 13:32:02 -- nvmf/common.sh@294 -- # net_devs=() 00:21:06.746 13:32:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:06.746 13:32:02 -- nvmf/common.sh@295 
-- # e810=() 00:21:06.746 13:32:02 -- nvmf/common.sh@295 -- # local -ga e810 00:21:06.746 13:32:02 -- nvmf/common.sh@296 -- # x722=() 00:21:06.746 13:32:02 -- nvmf/common.sh@296 -- # local -ga x722 00:21:06.746 13:32:02 -- nvmf/common.sh@297 -- # mlx=() 00:21:06.746 13:32:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:06.746 13:32:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.746 13:32:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:06.746 13:32:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:06.746 13:32:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.746 13:32:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:06.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:06.746 13:32:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:06.746 13:32:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:06.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:06.746 13:32:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.746 13:32:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.746 13:32:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.746 13:32:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:06.746 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:21:06.746 13:32:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.746 13:32:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:06.746 13:32:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.746 13:32:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.746 13:32:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:06.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:06.746 13:32:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.746 13:32:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:06.746 13:32:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:06.746 13:32:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:06.746 13:32:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.746 13:32:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.746 13:32:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.746 13:32:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:06.746 13:32:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.746 13:32:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.746 13:32:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:06.746 13:32:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.746 13:32:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.746 13:32:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:06.746 13:32:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:06.746 13:32:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.746 13:32:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.746 13:32:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.746 13:32:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.746 13:32:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:06.746 13:32:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.746 13:32:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.746 13:32:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.746 13:32:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:06.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:21:06.746 00:21:06.746 --- 10.0.0.2 ping statistics --- 00:21:06.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.746 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:21:06.746 13:32:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:21:06.746 00:21:06.747 --- 10.0.0.1 ping statistics --- 00:21:06.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.747 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:21:06.747 13:32:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.747 13:32:03 -- nvmf/common.sh@410 -- # return 0 00:21:06.747 13:32:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:06.747 13:32:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.747 13:32:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:06.747 13:32:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:06.747 13:32:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.747 13:32:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:06.747 13:32:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:06.747 13:32:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:06.747 13:32:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:06.747 13:32:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:06.747 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 13:32:03 -- nvmf/common.sh@469 -- # nvmfpid=994946 00:21:06.747 13:32:03 -- nvmf/common.sh@470 -- # waitforlisten 994946 00:21:06.747 13:32:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:06.747 13:32:03 -- common/autotest_common.sh@819 -- # '[' -z 994946 ']' 00:21:06.747 13:32:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.747 13:32:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.747 13:32:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.747 13:32:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.747 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 [2024-07-26 13:32:03.134391] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:06.747 [2024-07-26 13:32:03.134445] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.747 [2024-07-26 13:32:03.217659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.747 [2024-07-26 13:32:03.246842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:06.747 [2024-07-26 13:32:03.246977] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.747 [2024-07-26 13:32:03.246987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.747 [2024-07-26 13:32:03.246995] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.747 [2024-07-26 13:32:03.247136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.747 [2024-07-26 13:32:03.247300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:06.747 [2024-07-26 13:32:03.247580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:06.747 [2024-07-26 13:32:03.247581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.747 13:32:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.747 13:32:03 -- common/autotest_common.sh@852 -- # return 0 00:21:06.747 13:32:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:06.747 13:32:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:06.747 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 13:32:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.747 13:32:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.747 13:32:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.747 13:32:03 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 [2024-07-26 13:32:04.004122] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.747 13:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.747 13:32:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:06.747 13:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.747 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 Malloc0 00:21:06.747 13:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.747 13:32:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.747 13:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.747 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 13:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.747 13:32:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:06.747 13:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.747 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 13:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.747 13:32:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.747 13:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.747 13:32:04 -- common/autotest_common.sh@10 -- # set +x 00:21:06.747 [2024-07-26 13:32:04.069701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.747 13:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.747 13:32:04 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:06.747 13:32:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:06.747 13:32:04 -- nvmf/common.sh@520 -- # config=() 00:21:06.747 13:32:04 -- nvmf/common.sh@520 -- # local subsystem config 00:21:06.747 13:32:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:06.747 13:32:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:06.747 { 00:21:06.747 "params": { 00:21:06.747 "name": "Nvme$subsystem", 00:21:06.747 "trtype": "$TEST_TRANSPORT", 00:21:06.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.747 "adrfam": "ipv4", 00:21:06.747 "trsvcid": 
"$NVMF_PORT", 00:21:06.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.747 "hdgst": ${hdgst:-false}, 00:21:06.747 "ddgst": ${ddgst:-false} 00:21:06.747 }, 00:21:06.747 "method": "bdev_nvme_attach_controller" 00:21:06.747 } 00:21:06.747 EOF 00:21:06.747 )") 00:21:06.747 13:32:04 -- nvmf/common.sh@542 -- # cat 00:21:06.747 13:32:04 -- nvmf/common.sh@544 -- # jq . 00:21:06.747 13:32:04 -- nvmf/common.sh@545 -- # IFS=, 00:21:06.747 13:32:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:06.747 "params": { 00:21:06.747 "name": "Nvme1", 00:21:06.747 "trtype": "tcp", 00:21:06.747 "traddr": "10.0.0.2", 00:21:06.747 "adrfam": "ipv4", 00:21:06.747 "trsvcid": "4420", 00:21:06.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.747 "hdgst": false, 00:21:06.747 "ddgst": false 00:21:06.747 }, 00:21:06.747 "method": "bdev_nvme_attach_controller" 00:21:06.747 }' 00:21:06.747 [2024-07-26 13:32:04.120966] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:06.747 [2024-07-26 13:32:04.121035] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995168 ] 00:21:06.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.747 [2024-07-26 13:32:04.187767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:07.012 [2024-07-26 13:32:04.226636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.012 [2024-07-26 13:32:04.226758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.012 [2024-07-26 13:32:04.226760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.012 [2024-07-26 13:32:04.401584] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:07.012 [2024-07-26 13:32:04.401617] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:07.012 I/O targets: 00:21:07.012 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:07.012 00:21:07.012 00:21:07.012 CUnit - A unit testing framework for C - Version 2.1-3 00:21:07.012 http://cunit.sourceforge.net/ 00:21:07.012 00:21:07.012 00:21:07.012 Suite: bdevio tests on: Nvme1n1 00:21:07.012 Test: blockdev write read block ...passed 00:21:07.336 Test: blockdev write zeroes read block ...passed 00:21:07.336 Test: blockdev write zeroes read no split ...passed 00:21:07.336 Test: blockdev write zeroes read split ...passed 00:21:07.336 Test: blockdev write zeroes read split partial ...passed 00:21:07.336 Test: blockdev reset ...[2024-07-26 13:32:04.598745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.336 [2024-07-26 13:32:04.598809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ef480 (9): Bad file descriptor 00:21:07.336 [2024-07-26 13:32:04.619873] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:07.336 passed 00:21:07.336 Test: blockdev write read 8 blocks ...passed 00:21:07.336 Test: blockdev write read size > 128k ...passed 00:21:07.336 Test: blockdev write read invalid size ...passed 00:21:07.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:07.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:07.336 Test: blockdev write read max offset ...passed 00:21:07.596 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:07.596 Test: blockdev writev readv 8 blocks ...passed 00:21:07.596 Test: blockdev writev readv 30 x 1block ...passed 00:21:07.596 Test: blockdev writev readv block ...passed 00:21:07.596 Test: blockdev writev readv size > 128k ...passed 00:21:07.596 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:07.596 Test: blockdev comparev and writev ...[2024-07-26 13:32:04.893598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.893623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.596 [2024-07-26 13:32:04.893635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.893641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:07.596 [2024-07-26 13:32:04.894303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.894311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:07.596 [2024-07-26 13:32:04.894321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.894326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:07.596 [2024-07-26 13:32:04.894976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.894983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:07.596 [2024-07-26 13:32:04.894993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.596 [2024-07-26 13:32:04.894998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:07.597 [2024-07-26 13:32:04.895612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.597 [2024-07-26 13:32:04.895620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:07.597 [2024-07-26 13:32:04.895629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:07.597 [2024-07-26 13:32:04.895635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:07.597 passed 00:21:07.597 Test: blockdev nvme passthru rw ...passed 00:21:07.597 Test: blockdev nvme passthru vendor specific ...[2024-07-26 13:32:04.980357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.597 [2024-07-26 13:32:04.980367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:07.597 [2024-07-26 13:32:04.980886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.597 [2024-07-26 13:32:04.980893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:07.597 [2024-07-26 13:32:04.981403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.597 [2024-07-26 13:32:04.981411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:07.597 [2024-07-26 13:32:04.981923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.597 [2024-07-26 13:32:04.981931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:07.597 passed 00:21:07.597 Test: blockdev nvme admin passthru ...passed 00:21:07.597 Test: blockdev copy ...passed 00:21:07.597 00:21:07.597 Run Summary: Type Total Ran Passed Failed Inactive 00:21:07.597 suites 1 1 n/a 0 0 00:21:07.597 tests 23 23 23 0 0 00:21:07.597 asserts 152 152 152 0 n/a 00:21:07.597 00:21:07.597 Elapsed time = 1.287 seconds 00:21:07.858 13:32:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.858 13:32:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.858 13:32:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.858 13:32:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.858 13:32:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:07.858 13:32:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:07.858 13:32:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:07.858 13:32:05 -- nvmf/common.sh@116 -- # sync 00:21:07.858 13:32:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:07.858 13:32:05 -- nvmf/common.sh@119 -- # set +e 00:21:07.858 13:32:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:07.858 13:32:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:07.858 rmmod nvme_tcp 00:21:07.858 rmmod nvme_fabrics 00:21:07.858 rmmod nvme_keyring 00:21:07.858 13:32:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:07.858 13:32:05 -- nvmf/common.sh@123 -- # set -e 00:21:07.858 13:32:05 -- nvmf/common.sh@124 -- # return 0 00:21:07.858 13:32:05 -- nvmf/common.sh@477 -- # '[' -n 994946 ']' 00:21:07.858 13:32:05 -- nvmf/common.sh@478 -- # killprocess 994946 00:21:07.858 13:32:05 -- common/autotest_common.sh@926 -- # '[' -z 994946 ']' 00:21:07.858 13:32:05 -- common/autotest_common.sh@930 -- # kill -0 994946 00:21:07.858 13:32:05 -- common/autotest_common.sh@931 -- # uname 00:21:07.858 13:32:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:07.858 13:32:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 994946 00:21:07.858 13:32:05 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:07.858 13:32:05 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:07.858 13:32:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 994946' 00:21:07.858 killing process with pid 994946 00:21:07.858 13:32:05 -- common/autotest_common.sh@945 -- # kill 994946 00:21:07.858 13:32:05 -- common/autotest_common.sh@950 -- # wait 994946 00:21:08.119 13:32:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:08.119 13:32:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:08.119 13:32:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:08.119 13:32:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.119 13:32:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:08.119 13:32:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.119 13:32:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.119 13:32:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.036 13:32:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:10.036 00:21:10.036 real 0m11.665s 00:21:10.036 user 0m12.720s 00:21:10.036 sys 0m5.832s 00:21:10.036 13:32:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.036 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:21:10.036 ************************************ 00:21:10.036 END TEST nvmf_bdevio 00:21:10.036 ************************************ 00:21:10.297 13:32:07 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:10.297 13:32:07 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.297 13:32:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:10.297 13:32:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.297 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:21:10.297 ************************************ 00:21:10.297 START TEST nvmf_bdevio_no_huge 00:21:10.297 ************************************ 00:21:10.297 13:32:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.297 * Looking for test storage... 
00:21:10.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.297 13:32:07 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.297 13:32:07 -- nvmf/common.sh@7 -- # uname -s 00:21:10.297 13:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.297 13:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.297 13:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.297 13:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.297 13:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.297 13:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.297 13:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.298 13:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.298 13:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.298 13:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.298 13:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.298 13:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.298 13:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.298 13:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.298 13:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.298 13:32:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.298 13:32:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.298 13:32:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.298 13:32:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.298 13:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.298 13:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.298 13:32:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.298 13:32:07 -- paths/export.sh@5 -- # export PATH 00:21:10.298 13:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.298 13:32:07 -- nvmf/common.sh@46 -- # : 0 00:21:10.298 13:32:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:10.298 13:32:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:10.298 13:32:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:10.298 13:32:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.298 13:32:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.298 13:32:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:10.298 13:32:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:10.298 13:32:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:10.298 13:32:07 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.298 13:32:07 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.298 13:32:07 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:10.298 13:32:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:10.298 13:32:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.298 13:32:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:10.298 13:32:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:10.298 13:32:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:10.298 13:32:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.298 13:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.298 13:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.298 13:32:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:10.298 13:32:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:10.298 13:32:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:10.298 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:21:16.888 13:32:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:16.888 13:32:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:16.888 13:32:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:16.888 13:32:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:16.888 13:32:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:16.888 13:32:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:16.888 13:32:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:16.888 13:32:14 -- nvmf/common.sh@294 -- # net_devs=() 00:21:16.888 13:32:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:16.888 13:32:14 -- nvmf/common.sh@295 
-- # e810=() 00:21:16.888 13:32:14 -- nvmf/common.sh@295 -- # local -ga e810 00:21:16.888 13:32:14 -- nvmf/common.sh@296 -- # x722=() 00:21:16.888 13:32:14 -- nvmf/common.sh@296 -- # local -ga x722 00:21:16.888 13:32:14 -- nvmf/common.sh@297 -- # mlx=() 00:21:16.888 13:32:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:16.888 13:32:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.888 13:32:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:16.888 13:32:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:16.888 13:32:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:16.888 13:32:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:16.888 13:32:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:16.888 13:32:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:16.888 13:32:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:16.888 13:32:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:16.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:16.888 13:32:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:17.150 13:32:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:17.150 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:17.150 13:32:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:17.150 13:32:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:17.150 13:32:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.150 13:32:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:17.150 13:32:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.150 13:32:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:17.150 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:21:17.150 13:32:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.150 13:32:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:17.150 13:32:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.150 13:32:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:17.150 13:32:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.150 13:32:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:17.150 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:17.150 13:32:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.150 13:32:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:17.150 13:32:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:17.150 13:32:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:17.150 13:32:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:17.150 13:32:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.150 13:32:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.150 13:32:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.150 13:32:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:17.150 13:32:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.150 13:32:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.150 13:32:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:17.150 13:32:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.150 13:32:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.150 13:32:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:17.150 13:32:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:17.150 13:32:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.150 13:32:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.150 13:32:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.150 13:32:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.150 13:32:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:17.150 13:32:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.412 13:32:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.412 13:32:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.412 13:32:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:17.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:21:17.412 00:21:17.412 --- 10.0.0.2 ping statistics --- 00:21:17.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.412 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:21:17.412 13:32:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:21:17.412 00:21:17.412 --- 10.0.0.1 ping statistics --- 00:21:17.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.412 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:17.412 13:32:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.412 13:32:14 -- nvmf/common.sh@410 -- # return 0 00:21:17.412 13:32:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:17.413 13:32:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.413 13:32:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:17.413 13:32:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:17.413 13:32:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.413 13:32:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:17.413 13:32:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:17.413 13:32:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:17.413 13:32:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:17.413 13:32:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:17.413 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:21:17.413 13:32:14 -- nvmf/common.sh@469 -- # nvmfpid=999534 00:21:17.413 13:32:14 -- nvmf/common.sh@470 -- # waitforlisten 999534 00:21:17.413 13:32:14 -- common/autotest_common.sh@819 -- # '[' -z 999534 ']' 00:21:17.413 13:32:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:17.413 13:32:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.413 13:32:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:17.413 13:32:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.413 13:32:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:17.413 13:32:14 -- common/autotest_common.sh@10 -- # set +x 00:21:17.413 [2024-07-26 13:32:14.802044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:17.413 [2024-07-26 13:32:14.802118] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:17.674 [2024-07-26 13:32:14.897353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.674 [2024-07-26 13:32:14.975370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:17.674 [2024-07-26 13:32:14.975524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.674 [2024-07-26 13:32:14.975532] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.674 [2024-07-26 13:32:14.975540] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.674 [2024-07-26 13:32:14.975703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.674 [2024-07-26 13:32:14.975867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:17.674 [2024-07-26 13:32:14.976060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.674 [2024-07-26 13:32:14.976061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:18.247 13:32:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.247 13:32:15 -- common/autotest_common.sh@852 -- # return 0 00:21:18.247 13:32:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:18.247 13:32:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 13:32:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.247 13:32:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.247 13:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 [2024-07-26 13:32:15.648716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.247 13:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.247 13:32:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:18.247 13:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 Malloc0 00:21:18.247 13:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.247 13:32:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.247 13:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 13:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.247 13:32:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.247 13:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 13:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.247 13:32:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.247 13:32:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:18.247 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.247 [2024-07-26 13:32:15.702261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.247 13:32:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:18.247 13:32:15 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:18.247 13:32:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:18.247 13:32:15 -- nvmf/common.sh@520 -- # config=() 00:21:18.247 13:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:21:18.248 13:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:18.248 13:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:18.248 { 00:21:18.248 "params": { 00:21:18.248 "name": "Nvme$subsystem", 00:21:18.248 "trtype": "$TEST_TRANSPORT", 00:21:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.248 "adrfam": "ipv4", 00:21:18.248 
"trsvcid": "$NVMF_PORT", 00:21:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.248 "hdgst": ${hdgst:-false}, 00:21:18.248 "ddgst": ${ddgst:-false} 00:21:18.248 }, 00:21:18.248 "method": "bdev_nvme_attach_controller" 00:21:18.248 } 00:21:18.248 EOF 00:21:18.248 )") 00:21:18.248 13:32:15 -- nvmf/common.sh@542 -- # cat 00:21:18.248 13:32:15 -- nvmf/common.sh@544 -- # jq . 00:21:18.509 13:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:21:18.509 13:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:18.509 "params": { 00:21:18.509 "name": "Nvme1", 00:21:18.509 "trtype": "tcp", 00:21:18.509 "traddr": "10.0.0.2", 00:21:18.509 "adrfam": "ipv4", 00:21:18.509 "trsvcid": "4420", 00:21:18.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.509 "hdgst": false, 00:21:18.509 "ddgst": false 00:21:18.509 }, 00:21:18.509 "method": "bdev_nvme_attach_controller" 00:21:18.509 }' 00:21:18.509 [2024-07-26 13:32:15.754758] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:18.509 [2024-07-26 13:32:15.754827] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid999688 ] 00:21:18.509 [2024-07-26 13:32:15.821559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:18.509 [2024-07-26 13:32:15.890659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.509 [2024-07-26 13:32:15.890787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.509 [2024-07-26 13:32:15.890790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.771 [2024-07-26 13:32:16.065232] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:18.771 [2024-07-26 13:32:16.065257] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:18.771 I/O targets: 00:21:18.771 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:18.771 00:21:18.771 00:21:18.771 CUnit - A unit testing framework for C - Version 2.1-3 00:21:18.771 http://cunit.sourceforge.net/ 00:21:18.771 00:21:18.771 00:21:18.771 Suite: bdevio tests on: Nvme1n1 00:21:18.771 Test: blockdev write read block ...passed 00:21:18.771 Test: blockdev write zeroes read block ...passed 00:21:18.771 Test: blockdev write zeroes read no split ...passed 00:21:18.771 Test: blockdev write zeroes read split ...passed 00:21:18.771 Test: blockdev write zeroes read split partial ...passed 00:21:18.771 Test: blockdev reset ...[2024-07-26 13:32:16.231299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:18.771 [2024-07-26 13:32:16.231363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61ca10 (9): Bad file descriptor 00:21:19.033 [2024-07-26 13:32:16.251130] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:19.033 passed 00:21:19.033 Test: blockdev write read 8 blocks ...passed 00:21:19.033 Test: blockdev write read size > 128k ...passed 00:21:19.033 Test: blockdev write read invalid size ...passed 00:21:19.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:19.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:19.033 Test: blockdev write read max offset ...passed 00:21:19.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:19.033 Test: blockdev writev readv 8 blocks ...passed 00:21:19.033 Test: blockdev writev readv 30 x 1block ...passed 00:21:19.033 Test: blockdev writev readv block ...passed 00:21:19.294 Test: blockdev writev readv size > 128k ...passed 00:21:19.294 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:19.294 Test: blockdev comparev and writev ...[2024-07-26 13:32:16.512525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.512550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.512560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.512566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.512973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.512981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.512991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.512996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.513410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.513418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.513427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.513863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.513872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.294 [2024-07-26 13:32:16.513887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.294 passed 00:21:19.294 Test: blockdev nvme passthru rw ...passed 00:21:19.294 Test: blockdev nvme passthru vendor specific ...[2024-07-26 13:32:16.596700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.294 [2024-07-26 13:32:16.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.596958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.294 [2024-07-26 13:32:16.596965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.597243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.294 [2024-07-26 13:32:16.597251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.294 [2024-07-26 13:32:16.597507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.294 [2024-07-26 13:32:16.597514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.294 passed 00:21:19.294 Test: blockdev nvme admin passthru ...passed 00:21:19.294 Test: blockdev copy ...passed 00:21:19.294 00:21:19.294 Run Summary: Type Total Ran Passed Failed Inactive 00:21:19.294 suites 1 1 n/a 0 0 00:21:19.294 tests 23 23 23 0 0 00:21:19.294 asserts 152 152 152 0 n/a 00:21:19.294 00:21:19.294 Elapsed time = 1.186 seconds 00:21:19.556 13:32:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.556 13:32:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:19.556 13:32:16 -- common/autotest_common.sh@10 -- # set +x 00:21:19.556 13:32:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:19.556 13:32:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:19.556 13:32:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:19.556 13:32:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:19.556 13:32:16 -- nvmf/common.sh@116 -- # sync 00:21:19.556 13:32:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:19.556 13:32:16 -- nvmf/common.sh@119 -- # set +e 00:21:19.556 13:32:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:19.556 13:32:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:19.556 rmmod nvme_tcp 00:21:19.556 rmmod nvme_fabrics 00:21:19.556 rmmod nvme_keyring 00:21:19.556 13:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:19.556 13:32:16 -- nvmf/common.sh@123 -- # set -e 00:21:19.556 13:32:16 -- nvmf/common.sh@124 -- # return 0 00:21:19.556 13:32:16 -- nvmf/common.sh@477 -- # '[' -n 999534 ']' 00:21:19.556 13:32:16 -- nvmf/common.sh@478 -- # killprocess 999534 00:21:19.556 13:32:16 -- common/autotest_common.sh@926 -- # '[' -z 999534 ']' 00:21:19.556 13:32:16 -- common/autotest_common.sh@930 -- # kill -0 999534 00:21:19.556 13:32:16 -- common/autotest_common.sh@931 -- # uname 00:21:19.556 13:32:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:19.556 13:32:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999534 00:21:19.817 13:32:17 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:19.817 13:32:17 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:19.817 13:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999534' 00:21:19.817 killing process with pid 999534 00:21:19.817 13:32:17 -- common/autotest_common.sh@945 -- # kill 999534 00:21:19.817 13:32:17 -- common/autotest_common.sh@950 -- # wait 999534 00:21:20.078 13:32:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:20.078 13:32:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:20.078 13:32:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:20.078 13:32:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.078 13:32:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:20.078 13:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.078 13:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.078 13:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.996 13:32:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:21.996 00:21:21.996 real 0m11.867s 00:21:21.996 user 0m13.226s 00:21:21.996 sys 0m6.180s 00:21:21.996 13:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.996 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:21:21.996 ************************************ 00:21:21.996 END TEST nvmf_bdevio_no_huge 00:21:21.996 ************************************ 00:21:21.996 13:32:19 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:21.996 13:32:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:21.996 13:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:21.996 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:21:21.997 ************************************ 00:21:21.997 START TEST nvmf_tls 00:21:21.997 ************************************ 00:21:21.997 13:32:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:22.258 * Looking for test storage... 
00:21:22.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.258 13:32:19 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.258 13:32:19 -- nvmf/common.sh@7 -- # uname -s 00:21:22.258 13:32:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.258 13:32:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.258 13:32:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.258 13:32:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.258 13:32:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.258 13:32:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.258 13:32:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.258 13:32:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.258 13:32:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.258 13:32:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.258 13:32:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.258 13:32:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.258 13:32:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.258 13:32:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.258 13:32:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.258 13:32:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.258 13:32:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.258 13:32:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.258 13:32:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.258 13:32:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.258 13:32:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.258 13:32:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.258 13:32:19 -- paths/export.sh@5 -- # export PATH 00:21:22.258 13:32:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.258 13:32:19 -- nvmf/common.sh@46 -- # : 0 00:21:22.258 13:32:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:22.258 13:32:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:22.258 13:32:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:22.258 13:32:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.258 13:32:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.258 13:32:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:22.258 13:32:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:22.258 13:32:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:22.258 13:32:19 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.258 13:32:19 -- target/tls.sh@71 -- # nvmftestinit 00:21:22.258 13:32:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:22.258 13:32:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.258 13:32:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:22.258 13:32:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:22.258 13:32:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:22.258 13:32:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.258 13:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.258 13:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.258 13:32:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:22.258 13:32:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:22.258 13:32:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:22.258 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:21:30.411 13:32:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:30.411 13:32:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:30.411 13:32:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:30.411 13:32:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:30.411 13:32:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:30.411 13:32:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:30.411 13:32:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:30.411 13:32:26 -- nvmf/common.sh@294 -- # net_devs=() 00:21:30.411 13:32:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:30.411 13:32:26 -- nvmf/common.sh@295 -- # e810=() 00:21:30.411 
13:32:26 -- nvmf/common.sh@295 -- # local -ga e810 00:21:30.411 13:32:26 -- nvmf/common.sh@296 -- # x722=() 00:21:30.411 13:32:26 -- nvmf/common.sh@296 -- # local -ga x722 00:21:30.411 13:32:26 -- nvmf/common.sh@297 -- # mlx=() 00:21:30.411 13:32:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:30.411 13:32:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.411 13:32:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:30.411 13:32:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:30.411 13:32:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:30.411 13:32:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:30.411 13:32:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:30.411 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:30.411 13:32:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:30.411 13:32:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:30.411 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:30.411 13:32:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:30.411 13:32:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:30.411 13:32:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.411 13:32:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:30.411 13:32:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.411 13:32:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:30.411 Found net devices under 
0000:4b:00.0: cvl_0_0 00:21:30.411 13:32:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.411 13:32:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:30.411 13:32:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.411 13:32:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:30.411 13:32:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.411 13:32:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:30.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:30.411 13:32:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.411 13:32:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:30.411 13:32:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:30.411 13:32:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:30.411 13:32:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:30.412 13:32:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:30.412 13:32:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.412 13:32:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.412 13:32:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.412 13:32:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:30.412 13:32:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.412 13:32:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.412 13:32:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:30.412 13:32:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.412 13:32:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.412 13:32:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:30.412 13:32:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:30.412 13:32:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.412 13:32:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.412 13:32:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.412 13:32:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.412 13:32:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:30.412 13:32:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.412 13:32:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.412 13:32:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.412 13:32:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:30.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.760 ms 00:21:30.412 00:21:30.412 --- 10.0.0.2 ping statistics --- 00:21:30.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.412 rtt min/avg/max/mdev = 0.760/0.760/0.760/0.000 ms 00:21:30.412 13:32:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:30.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:21:30.412 00:21:30.412 --- 10.0.0.1 ping statistics --- 00:21:30.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.412 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:21:30.412 13:32:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.412 13:32:26 -- nvmf/common.sh@410 -- # return 0 00:21:30.412 13:32:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:30.412 13:32:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.412 13:32:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:30.412 13:32:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:30.412 13:32:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.412 13:32:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:30.412 13:32:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:30.412 13:32:26 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:30.412 13:32:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:30.412 13:32:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:30.412 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.412 13:32:26 -- nvmf/common.sh@469 -- # nvmfpid=1004080 00:21:30.412 13:32:26 -- nvmf/common.sh@470 -- # waitforlisten 1004080 00:21:30.412 13:32:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:30.412 13:32:26 -- common/autotest_common.sh@819 -- # '[' -z 1004080 ']' 00:21:30.412 13:32:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.412 13:32:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:30.412 13:32:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.412 13:32:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:30.412 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:21:30.412 [2024-07-26 13:32:26.906574] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:30.412 [2024-07-26 13:32:26.906642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.412 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.412 [2024-07-26 13:32:26.996486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.412 [2024-07-26 13:32:27.041300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:30.412 [2024-07-26 13:32:27.041452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.412 [2024-07-26 13:32:27.041461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.412 [2024-07-26 13:32:27.041468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
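In short, the nvmf_tcp_init trace above builds a two-namespace TCP test topology: the target-side port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt will run, the initiator-side port (cvl_0_1, 10.0.0.1/24) stays in the root namespace, and a single iptables rule opens TCP/4420. A condensed sketch of those commands (ordering and error handling simplified, the initial address flushes omitted):

    ip netns add cvl_0_0_ns_spdk                                       # target runs inside this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns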
00:21:30.412 [2024-07-26 13:32:27.041501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.412 13:32:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.412 13:32:27 -- common/autotest_common.sh@852 -- # return 0 00:21:30.412 13:32:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.412 13:32:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:30.412 13:32:27 -- common/autotest_common.sh@10 -- # set +x 00:21:30.412 13:32:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.412 13:32:27 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:30.412 13:32:27 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:30.412 true 00:21:30.674 13:32:27 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.674 13:32:27 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:30.674 13:32:28 -- target/tls.sh@82 -- # version=0 00:21:30.674 13:32:28 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:30.674 13:32:28 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:30.935 13:32:28 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.935 13:32:28 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:30.935 13:32:28 -- target/tls.sh@90 -- # version=13 00:21:30.935 13:32:28 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:30.935 13:32:28 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:31.195 13:32:28 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.195 13:32:28 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:31.460 13:32:28 -- target/tls.sh@98 -- # version=7 00:21:31.460 13:32:28 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:31.460 13:32:28 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.460 13:32:28 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:31.460 13:32:28 -- target/tls.sh@105 -- # ktls=false 00:21:31.460 13:32:28 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:31.460 13:32:28 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:31.792 13:32:29 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.792 13:32:29 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:31.792 13:32:29 -- target/tls.sh@113 -- # ktls=true 00:21:31.792 13:32:29 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:31.792 13:32:29 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:32.053 13:32:29 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:32.053 13:32:29 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:32.053 13:32:29 -- target/tls.sh@121 -- # ktls=false 00:21:32.053 13:32:29 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:32.053 13:32:29 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:21:32.053 13:32:29 -- target/tls.sh@49 -- # local key hash crc 00:21:32.053 13:32:29 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:32.053 13:32:29 -- target/tls.sh@51 -- # hash=01 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # gzip -1 -c 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # tail -c8 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # head -c 4 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # crc='p$H�' 00:21:32.053 13:32:29 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:32.053 13:32:29 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:32.053 13:32:29 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.053 13:32:29 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.053 13:32:29 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:32.053 13:32:29 -- target/tls.sh@49 -- # local key hash crc 00:21:32.053 13:32:29 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:32.053 13:32:29 -- target/tls.sh@51 -- # hash=01 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # head -c 4 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # gzip -1 -c 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # tail -c8 00:21:32.053 13:32:29 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:32.053 13:32:29 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:32.053 13:32:29 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:32.315 13:32:29 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:32.315 13:32:29 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:32.315 13:32:29 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:32.315 13:32:29 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:32.315 13:32:29 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:32.315 13:32:29 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:32.315 13:32:29 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:32.315 13:32:29 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:32.315 13:32:29 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:32.315 13:32:29 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:32.577 13:32:29 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:32.577 13:32:29 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:32.577 13:32:29 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.577 [2024-07-26 13:32:30.015232] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
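The format_interchange_psk trace above is the whole key-derivation step for this test: the configured hex string is turned into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64(key || CRC32)>: where the CRC32 is taken from the gzip trailer rather than computed directly. A minimal standalone sketch of the same derivation (assumes coreutils + gzip; keeping the CRC in a shell variable only works here because these four bytes contain no NUL and no trailing newline):

    key=00112233445566778899aabbccddeeff
    # the last 8 bytes of a gzip stream are CRC32 (little-endian) + ISIZE; keep the first 4
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    psk="NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    echo "$psk"    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    printf '%s' "$psk" > key1.txt && chmod 0600 key1.txt   # mirror the harness: key files are chmod 0600

The second key (ffeeddccbbaa99887766554433221100) goes through the same steps into key2.txt; only key1.txt is registered with the target via nvmf_subsystem_add_host --psk below, which is why the later attach attempts using key2.txt or no PSK are expected to fail.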
00:21:32.577 13:32:30 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.838 13:32:30 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.100 [2024-07-26 13:32:30.344042] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.100 [2024-07-26 13:32:30.344250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.100 13:32:30 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.100 malloc0 00:21:33.100 13:32:30 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.361 13:32:30 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:33.622 13:32:30 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:33.622 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.629 Initializing NVMe Controllers 00:21:43.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.629 Initialization complete. Launching workers. 
00:21:43.629 ======================================================== 00:21:43.629 Latency(us) 00:21:43.629 Device Information : IOPS MiB/s Average min max 00:21:43.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19670.78 76.84 3253.56 1083.63 3969.60 00:21:43.629 ======================================================== 00:21:43.629 Total : 19670.78 76.84 3253.56 1083.63 3969.60 00:21:43.629 00:21:43.629 13:32:40 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:43.629 13:32:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.629 13:32:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.629 13:32:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.629 13:32:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:43.629 13:32:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.629 13:32:40 -- target/tls.sh@28 -- # bdevperf_pid=1006947 00:21:43.629 13:32:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.629 13:32:40 -- target/tls.sh@31 -- # waitforlisten 1006947 /var/tmp/bdevperf.sock 00:21:43.629 13:32:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.629 13:32:40 -- common/autotest_common.sh@819 -- # '[' -z 1006947 ']' 00:21:43.629 13:32:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.629 13:32:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:43.629 13:32:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.629 13:32:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:43.629 13:32:40 -- common/autotest_common.sh@10 -- # set +x 00:21:43.629 [2024-07-26 13:32:40.989236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:43.629 [2024-07-26 13:32:40.989293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006947 ] 00:21:43.629 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.629 [2024-07-26 13:32:41.039457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.629 [2024-07-26 13:32:41.066028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.572 13:32:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:44.572 13:32:41 -- common/autotest_common.sh@852 -- # return 0 00:21:44.572 13:32:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:44.572 [2024-07-26 13:32:41.881757] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.572 TLSTESTn1 00:21:44.572 13:32:41 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.833 Running I/O for 10 seconds... 00:21:54.839 00:21:54.839 Latency(us) 00:21:54.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.839 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.839 Verification LBA range: start 0x0 length 0x2000 00:21:54.839 TLSTESTn1 : 10.06 1487.10 5.81 0.00 0.00 85890.23 10977.28 94371.84 00:21:54.839 =================================================================================================================== 00:21:54.839 Total : 1487.10 5.81 0.00 0.00 85890.23 10977.28 94371.84 00:21:54.839 0 00:21:54.839 13:32:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.839 13:32:52 -- target/tls.sh@45 -- # killprocess 1006947 00:21:54.839 13:32:52 -- common/autotest_common.sh@926 -- # '[' -z 1006947 ']' 00:21:54.839 13:32:52 -- common/autotest_common.sh@930 -- # kill -0 1006947 00:21:54.839 13:32:52 -- common/autotest_common.sh@931 -- # uname 00:21:54.839 13:32:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.839 13:32:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1006947 00:21:54.839 13:32:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:54.839 13:32:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:54.839 13:32:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1006947' 00:21:54.839 killing process with pid 1006947 00:21:54.839 13:32:52 -- common/autotest_common.sh@945 -- # kill 1006947 00:21:54.839 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.839 00:21:54.839 Latency(us) 00:21:54.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.839 =================================================================================================================== 00:21:54.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.839 13:32:52 -- common/autotest_common.sh@950 -- # wait 1006947 00:21:55.100 13:32:52 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:55.100 13:32:52 -- common/autotest_common.sh@640 -- # local es=0 00:21:55.100 13:32:52 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:55.100 13:32:52 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:55.100 13:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:55.100 13:32:52 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:55.100 13:32:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:55.100 13:32:52 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:55.100 13:32:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:55.100 13:32:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:55.100 13:32:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:55.101 13:32:52 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:55.101 13:32:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.101 13:32:52 -- target/tls.sh@28 -- # bdevperf_pid=1009185 00:21:55.101 13:32:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.101 13:32:52 -- target/tls.sh@31 -- # waitforlisten 1009185 /var/tmp/bdevperf.sock 00:21:55.101 13:32:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.101 13:32:52 -- common/autotest_common.sh@819 -- # '[' -z 1009185 ']' 00:21:55.101 13:32:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.101 13:32:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:55.101 13:32:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.101 13:32:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:55.101 13:32:52 -- common/autotest_common.sh@10 -- # set +x 00:21:55.101 [2024-07-26 13:32:52.393095] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:55.101 [2024-07-26 13:32:52.393153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009185 ] 00:21:55.101 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.101 [2024-07-26 13:32:52.443203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.101 [2024-07-26 13:32:52.467700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.045 13:32:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:56.045 13:32:53 -- common/autotest_common.sh@852 -- # return 0 00:21:56.045 13:32:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:56.045 [2024-07-26 13:32:53.287266] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.045 [2024-07-26 13:32:53.297191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.045 [2024-07-26 13:32:53.297377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e20e50 (107): Transport endpoint is not connected 00:21:56.045 [2024-07-26 13:32:53.298372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e20e50 (9): Bad file descriptor 00:21:56.045 [2024-07-26 13:32:53.299374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.045 [2024-07-26 13:32:53.299381] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:56.045 [2024-07-26 13:32:53.299390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:56.045 request: 00:21:56.045 { 00:21:56.045 "name": "TLSTEST", 00:21:56.045 "trtype": "tcp", 00:21:56.045 "traddr": "10.0.0.2", 00:21:56.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.045 "adrfam": "ipv4", 00:21:56.045 "trsvcid": "4420", 00:21:56.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.045 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:56.045 "method": "bdev_nvme_attach_controller", 00:21:56.045 "req_id": 1 00:21:56.045 } 00:21:56.045 Got JSON-RPC error response 00:21:56.045 response: 00:21:56.045 { 00:21:56.045 "code": -32602, 00:21:56.045 "message": "Invalid parameters" 00:21:56.045 } 00:21:56.045 13:32:53 -- target/tls.sh@36 -- # killprocess 1009185 00:21:56.045 13:32:53 -- common/autotest_common.sh@926 -- # '[' -z 1009185 ']' 00:21:56.045 13:32:53 -- common/autotest_common.sh@930 -- # kill -0 1009185 00:21:56.045 13:32:53 -- common/autotest_common.sh@931 -- # uname 00:21:56.045 13:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:56.045 13:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1009185 00:21:56.045 13:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:56.045 13:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:56.045 13:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1009185' 00:21:56.045 killing process with pid 1009185 00:21:56.045 13:32:53 -- common/autotest_common.sh@945 -- # kill 1009185 00:21:56.045 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.045 00:21:56.045 Latency(us) 00:21:56.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.045 =================================================================================================================== 00:21:56.045 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.045 13:32:53 -- common/autotest_common.sh@950 -- # wait 1009185 00:21:56.045 13:32:53 -- target/tls.sh@37 -- # return 1 00:21:56.045 13:32:53 -- common/autotest_common.sh@643 -- # es=1 00:21:56.045 13:32:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:56.045 13:32:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:56.045 13:32:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:56.045 13:32:53 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:56.045 13:32:53 -- common/autotest_common.sh@640 -- # local es=0 00:21:56.045 13:32:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:56.045 13:32:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:56.045 13:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.045 13:32:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:56.045 13:32:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.045 13:32:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:56.045 13:32:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.045 13:32:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.045 13:32:53 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:21:56.045 13:32:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:56.045 13:32:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.045 13:32:53 -- target/tls.sh@28 -- # bdevperf_pid=1009516 00:21:56.045 13:32:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.045 13:32:53 -- target/tls.sh@31 -- # waitforlisten 1009516 /var/tmp/bdevperf.sock 00:21:56.045 13:32:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.045 13:32:53 -- common/autotest_common.sh@819 -- # '[' -z 1009516 ']' 00:21:56.045 13:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.045 13:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:56.045 13:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.045 13:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:56.045 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:56.306 [2024-07-26 13:32:53.541330] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:56.306 [2024-07-26 13:32:53.541401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009516 ] 00:21:56.306 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.306 [2024-07-26 13:32:53.592131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.306 [2024-07-26 13:32:53.616846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.878 13:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:56.878 13:32:54 -- common/autotest_common.sh@852 -- # return 0 00:21:56.878 13:32:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:57.140 [2024-07-26 13:32:54.424395] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.140 [2024-07-26 13:32:54.429861] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:57.140 [2024-07-26 13:32:54.429880] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:57.140 [2024-07-26 13:32:54.429900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.140 [2024-07-26 13:32:54.430380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7fe50 (107): Transport endpoint is not connected 00:21:57.140 [2024-07-26 13:32:54.431374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1e7fe50 (9): Bad file descriptor 00:21:57.140 [2024-07-26 13:32:54.432376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:57.140 [2024-07-26 13:32:54.432382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.140 [2024-07-26 13:32:54.432388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.140 request: 00:21:57.140 { 00:21:57.140 "name": "TLSTEST", 00:21:57.140 "trtype": "tcp", 00:21:57.140 "traddr": "10.0.0.2", 00:21:57.140 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.140 "adrfam": "ipv4", 00:21:57.140 "trsvcid": "4420", 00:21:57.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.140 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:57.140 "method": "bdev_nvme_attach_controller", 00:21:57.140 "req_id": 1 00:21:57.140 } 00:21:57.140 Got JSON-RPC error response 00:21:57.140 response: 00:21:57.140 { 00:21:57.140 "code": -32602, 00:21:57.140 "message": "Invalid parameters" 00:21:57.140 } 00:21:57.140 13:32:54 -- target/tls.sh@36 -- # killprocess 1009516 00:21:57.140 13:32:54 -- common/autotest_common.sh@926 -- # '[' -z 1009516 ']' 00:21:57.140 13:32:54 -- common/autotest_common.sh@930 -- # kill -0 1009516 00:21:57.140 13:32:54 -- common/autotest_common.sh@931 -- # uname 00:21:57.140 13:32:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:57.140 13:32:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1009516 00:21:57.140 13:32:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:57.140 13:32:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:57.140 13:32:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1009516' 00:21:57.140 killing process with pid 1009516 00:21:57.140 13:32:54 -- common/autotest_common.sh@945 -- # kill 1009516 00:21:57.140 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.140 00:21:57.140 Latency(us) 00:21:57.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.140 =================================================================================================================== 00:21:57.140 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.140 13:32:54 -- common/autotest_common.sh@950 -- # wait 1009516 00:21:57.401 13:32:54 -- target/tls.sh@37 -- # return 1 00:21:57.401 13:32:54 -- common/autotest_common.sh@643 -- # es=1 00:21:57.401 13:32:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:57.401 13:32:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:57.401 13:32:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:57.402 13:32:54 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:57.402 13:32:54 -- common/autotest_common.sh@640 -- # local es=0 00:21:57.402 13:32:54 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:57.402 13:32:54 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:57.402 13:32:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:57.402 13:32:54 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:57.402 13:32:54 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:57.402 13:32:54 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:57.402 13:32:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.402 13:32:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:57.402 13:32:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.402 13:32:54 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:57.402 13:32:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.402 13:32:54 -- target/tls.sh@28 -- # bdevperf_pid=1009572 00:21:57.402 13:32:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.402 13:32:54 -- target/tls.sh@31 -- # waitforlisten 1009572 /var/tmp/bdevperf.sock 00:21:57.402 13:32:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.402 13:32:54 -- common/autotest_common.sh@819 -- # '[' -z 1009572 ']' 00:21:57.402 13:32:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.402 13:32:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.402 13:32:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.402 13:32:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.402 13:32:54 -- common/autotest_common.sh@10 -- # set +x 00:21:57.402 [2024-07-26 13:32:54.665852] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:57.402 [2024-07-26 13:32:54.665906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009572 ] 00:21:57.402 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.402 [2024-07-26 13:32:54.714565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.402 [2024-07-26 13:32:54.740703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.975 13:32:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:57.975 13:32:55 -- common/autotest_common.sh@852 -- # return 0 00:21:57.975 13:32:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:58.236 [2024-07-26 13:32:55.548522] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.236 [2024-07-26 13:32:55.553027] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:58.236 [2024-07-26 13:32:55.553044] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:58.236 [2024-07-26 13:32:55.553064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.236 [2024-07-26 13:32:55.553715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5fe50 (107): Transport endpoint is not connected 00:21:58.236 [2024-07-26 13:32:55.554709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5fe50 (9): Bad file descriptor 00:21:58.236 [2024-07-26 13:32:55.555711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:58.236 [2024-07-26 13:32:55.555717] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.236 [2024-07-26 13:32:55.555722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:58.236 request: 00:21:58.236 { 00:21:58.236 "name": "TLSTEST", 00:21:58.236 "trtype": "tcp", 00:21:58.236 "traddr": "10.0.0.2", 00:21:58.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.236 "adrfam": "ipv4", 00:21:58.236 "trsvcid": "4420", 00:21:58.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.236 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:58.236 "method": "bdev_nvme_attach_controller", 00:21:58.236 "req_id": 1 00:21:58.236 } 00:21:58.236 Got JSON-RPC error response 00:21:58.236 response: 00:21:58.236 { 00:21:58.236 "code": -32602, 00:21:58.236 "message": "Invalid parameters" 00:21:58.236 } 00:21:58.236 13:32:55 -- target/tls.sh@36 -- # killprocess 1009572 00:21:58.236 13:32:55 -- common/autotest_common.sh@926 -- # '[' -z 1009572 ']' 00:21:58.236 13:32:55 -- common/autotest_common.sh@930 -- # kill -0 1009572 00:21:58.236 13:32:55 -- common/autotest_common.sh@931 -- # uname 00:21:58.236 13:32:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.236 13:32:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1009572 00:21:58.236 13:32:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:58.236 13:32:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:58.236 13:32:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1009572' 00:21:58.236 killing process with pid 1009572 00:21:58.236 13:32:55 -- common/autotest_common.sh@945 -- # kill 1009572 00:21:58.236 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.236 00:21:58.236 Latency(us) 00:21:58.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.236 =================================================================================================================== 00:21:58.236 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.236 13:32:55 -- common/autotest_common.sh@950 -- # wait 1009572 00:21:58.498 13:32:55 -- target/tls.sh@37 -- # return 1 00:21:58.498 13:32:55 -- common/autotest_common.sh@643 -- # es=1 00:21:58.498 13:32:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:58.498 13:32:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:58.498 13:32:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:58.498 13:32:55 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:58.498 13:32:55 -- common/autotest_common.sh@640 -- # local es=0 00:21:58.498 13:32:55 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:58.498 13:32:55 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:58.498 13:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:58.498 13:32:55 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:58.498 13:32:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:58.498 13:32:55 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:58.498 13:32:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:58.498 13:32:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:58.498 13:32:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:58.498 13:32:55 -- target/tls.sh@23 -- # psk= 00:21:58.498 13:32:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.498 13:32:55 -- target/tls.sh@28 
-- # bdevperf_pid=1009887 00:21:58.498 13:32:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.498 13:32:55 -- target/tls.sh@31 -- # waitforlisten 1009887 /var/tmp/bdevperf.sock 00:21:58.498 13:32:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.498 13:32:55 -- common/autotest_common.sh@819 -- # '[' -z 1009887 ']' 00:21:58.498 13:32:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.498 13:32:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.498 13:32:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.498 13:32:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.498 13:32:55 -- common/autotest_common.sh@10 -- # set +x 00:21:58.498 [2024-07-26 13:32:55.785862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:58.498 [2024-07-26 13:32:55.785916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009887 ] 00:21:58.498 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.498 [2024-07-26 13:32:55.835766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.498 [2024-07-26 13:32:55.860800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.444 13:32:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.444 13:32:56 -- common/autotest_common.sh@852 -- # return 0 00:21:59.444 13:32:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:59.444 [2024-07-26 13:32:56.695435] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:59.444 [2024-07-26 13:32:56.697057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178a520 (9): Bad file descriptor 00:21:59.444 [2024-07-26 13:32:56.698055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:59.444 [2024-07-26 13:32:56.698062] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:59.444 [2024-07-26 13:32:56.698068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
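[editor's note] The two failures above are deliberate negative tests in target/tls.sh: bdevperf is pointed at a TLS-required listener, first with a mismatched key file (key1.txt) and then with no PSK at all, and in both cases bdev_nvme_attach_controller is expected to return -32602 so the NOT wrapper can assert a non-zero exit. A minimal sketch of that check, assuming the bdevperf RPC socket and listener from the trace are already up (paths and NQNs copied from the log, not verified independently):

    # Hedged sketch: the attach must fail when no PSK is supplied to a TLS-only listener.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
        echo "unexpected success: controller attached without a PSK" >&2
        exit 1
    fi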
00:21:59.444 request: 00:21:59.444 { 00:21:59.444 "name": "TLSTEST", 00:21:59.444 "trtype": "tcp", 00:21:59.444 "traddr": "10.0.0.2", 00:21:59.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.444 "adrfam": "ipv4", 00:21:59.444 "trsvcid": "4420", 00:21:59.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.444 "method": "bdev_nvme_attach_controller", 00:21:59.444 "req_id": 1 00:21:59.444 } 00:21:59.444 Got JSON-RPC error response 00:21:59.444 response: 00:21:59.444 { 00:21:59.444 "code": -32602, 00:21:59.444 "message": "Invalid parameters" 00:21:59.444 } 00:21:59.444 13:32:56 -- target/tls.sh@36 -- # killprocess 1009887 00:21:59.444 13:32:56 -- common/autotest_common.sh@926 -- # '[' -z 1009887 ']' 00:21:59.444 13:32:56 -- common/autotest_common.sh@930 -- # kill -0 1009887 00:21:59.444 13:32:56 -- common/autotest_common.sh@931 -- # uname 00:21:59.444 13:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.444 13:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1009887 00:21:59.444 13:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:59.444 13:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:59.444 13:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1009887' 00:21:59.444 killing process with pid 1009887 00:21:59.444 13:32:56 -- common/autotest_common.sh@945 -- # kill 1009887 00:21:59.444 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.444 00:21:59.444 Latency(us) 00:21:59.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.444 =================================================================================================================== 00:21:59.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.444 13:32:56 -- common/autotest_common.sh@950 -- # wait 1009887 00:21:59.444 13:32:56 -- target/tls.sh@37 -- # return 1 00:21:59.444 13:32:56 -- common/autotest_common.sh@643 -- # es=1 00:21:59.444 13:32:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.444 13:32:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.444 13:32:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.444 13:32:56 -- target/tls.sh@167 -- # killprocess 1004080 00:21:59.444 13:32:56 -- common/autotest_common.sh@926 -- # '[' -z 1004080 ']' 00:21:59.444 13:32:56 -- common/autotest_common.sh@930 -- # kill -0 1004080 00:21:59.444 13:32:56 -- common/autotest_common.sh@931 -- # uname 00:21:59.444 13:32:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.444 13:32:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1004080 00:21:59.718 13:32:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:59.718 13:32:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:59.718 13:32:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1004080' 00:21:59.718 killing process with pid 1004080 00:21:59.718 13:32:56 -- common/autotest_common.sh@945 -- # kill 1004080 00:21:59.718 13:32:56 -- common/autotest_common.sh@950 -- # wait 1004080 00:21:59.718 13:32:57 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:59.718 13:32:57 -- target/tls.sh@49 -- # local key hash crc 00:21:59.718 13:32:57 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:59.718 13:32:57 -- target/tls.sh@51 -- # hash=02 00:21:59.718 13:32:57 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:21:59.718 13:32:57 -- target/tls.sh@52 -- # gzip -1 -c 00:21:59.718 13:32:57 -- target/tls.sh@52 -- # tail -c8 00:21:59.718 13:32:57 -- target/tls.sh@52 -- # head -c 4 00:21:59.718 13:32:57 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:59.718 13:32:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:59.718 13:32:57 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:59.718 13:32:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:59.718 13:32:57 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:59.718 13:32:57 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:59.718 13:32:57 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:59.718 13:32:57 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:59.718 13:32:57 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:59.718 13:32:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.718 13:32:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:59.718 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:21:59.718 13:32:57 -- nvmf/common.sh@469 -- # nvmfpid=1010249 00:21:59.718 13:32:57 -- nvmf/common.sh@470 -- # waitforlisten 1010249 00:21:59.718 13:32:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.718 13:32:57 -- common/autotest_common.sh@819 -- # '[' -z 1010249 ']' 00:21:59.718 13:32:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.718 13:32:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.719 13:32:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.719 13:32:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.719 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:21:59.719 [2024-07-26 13:32:57.127258] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:59.719 [2024-07-26 13:32:57.127314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.719 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.028 [2024-07-26 13:32:57.209836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.028 [2024-07-26 13:32:57.237245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.028 [2024-07-26 13:32:57.237339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.028 [2024-07-26 13:32:57.237345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.028 [2024-07-26 13:32:57.237350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
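[editor's note] The format_interchange_psk trace above builds the TLS PSK interchange string used for the rest of the section: the hex key is piped through gzip -1 and the first four bytes of the 8-byte gzip trailer (the little-endian CRC32) are kept, those raw CRC bytes are appended to the ASCII key, and the result is base64-encoded under the NVMeTLSkey-1:02: prefix (the 02 field mirrors the hash argument passed to the helper). A condensed sketch of the same steps, assuming the CRC bytes contain no NUL or trailing newline so command substitution preserves them (true for this key):

    # Hedged sketch of the interchange-key construction seen in the trace.
    key=00112233445566778899aabbccddeeff0011223344556677
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # gzip trailer = CRC32 (LE) + ISIZE; keep the CRC32
    psk="NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"    # expected to match the key_long value in the trace (NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:)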
00:22:00.028 [2024-07-26 13:32:57.237363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.601 13:32:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.601 13:32:57 -- common/autotest_common.sh@852 -- # return 0 00:22:00.601 13:32:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.601 13:32:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.601 13:32:57 -- common/autotest_common.sh@10 -- # set +x 00:22:00.601 13:32:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.601 13:32:57 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:00.601 13:32:57 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:00.601 13:32:57 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.601 [2024-07-26 13:32:58.053883] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.601 13:32:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:00.862 13:32:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.123 [2024-07-26 13:32:58.338580] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.123 [2024-07-26 13:32:58.338778] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.123 13:32:58 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.123 malloc0 00:22:01.123 13:32:58 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:01.385 13:32:58 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:01.385 13:32:58 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:01.385 13:32:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.385 13:32:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.385 13:32:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:01.385 13:32:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:01.385 13:32:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.385 13:32:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.385 13:32:58 -- target/tls.sh@28 -- # bdevperf_pid=1010614 00:22:01.385 13:32:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.385 13:32:58 -- target/tls.sh@31 -- # waitforlisten 1010614 /var/tmp/bdevperf.sock 00:22:01.385 13:32:58 -- common/autotest_common.sh@819 -- # '[' -z 1010614 
']' 00:22:01.385 13:32:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.385 13:32:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.385 13:32:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.385 13:32:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.385 13:32:58 -- common/autotest_common.sh@10 -- # set +x 00:22:01.385 [2024-07-26 13:32:58.802165] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:01.385 [2024-07-26 13:32:58.802385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010614 ] 00:22:01.385 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.385 [2024-07-26 13:32:58.844938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.646 [2024-07-26 13:32:58.871176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.647 13:32:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.647 13:32:58 -- common/autotest_common.sh@852 -- # return 0 00:22:01.647 13:32:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:01.647 [2024-07-26 13:32:59.073202] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.908 TLSTESTn1 00:22:01.908 13:32:59 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:01.908 Running I/O for 10 seconds... 
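[editor's note] With key_long.txt at mode 0600, the attach succeeds and bdevperf.py drives roughly ten seconds of verify I/O over the TLS-wrapped TCP connection (results follow). The plumbing that made this work is all in the trace above; a compressed recap of that sequence, with paths and NQNs taken verbatim from the log:

    # Hedged recap of the setup traced above (target RPCs on the default /var/tmp/spdk.sock).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    chmod 0600 "$KEY"                                                     # owner-only permissions on the PSK file
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k => TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # Initiator side: attach through bdevperf's RPC socket, then run the perf pass.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests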
00:22:11.914 00:22:11.914 Latency(us) 00:22:11.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.914 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:11.914 Verification LBA range: start 0x0 length 0x2000 00:22:11.914 TLSTESTn1 : 10.07 1487.76 5.81 0.00 0.00 85851.03 8956.59 88255.15 00:22:11.914 =================================================================================================================== 00:22:11.914 Total : 1487.76 5.81 0.00 0.00 85851.03 8956.59 88255.15 00:22:11.914 0 00:22:11.914 13:33:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.914 13:33:09 -- target/tls.sh@45 -- # killprocess 1010614 00:22:11.914 13:33:09 -- common/autotest_common.sh@926 -- # '[' -z 1010614 ']' 00:22:11.914 13:33:09 -- common/autotest_common.sh@930 -- # kill -0 1010614 00:22:11.914 13:33:09 -- common/autotest_common.sh@931 -- # uname 00:22:11.914 13:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:11.914 13:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1010614 00:22:12.176 13:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:12.176 13:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:12.176 13:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1010614' 00:22:12.176 killing process with pid 1010614 00:22:12.176 13:33:09 -- common/autotest_common.sh@945 -- # kill 1010614 00:22:12.176 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.176 00:22:12.176 Latency(us) 00:22:12.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.176 =================================================================================================================== 00:22:12.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.176 13:33:09 -- common/autotest_common.sh@950 -- # wait 1010614 00:22:12.176 13:33:09 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:12.176 13:33:09 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:12.176 13:33:09 -- common/autotest_common.sh@640 -- # local es=0 00:22:12.176 13:33:09 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:12.176 13:33:09 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:12.176 13:33:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.176 13:33:09 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:12.176 13:33:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:12.176 13:33:09 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:12.176 13:33:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.176 13:33:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.176 13:33:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.176 13:33:09 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:12.176 13:33:09 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.176 13:33:09 -- target/tls.sh@28 -- # bdevperf_pid=1012917 00:22:12.176 13:33:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.176 13:33:09 -- target/tls.sh@31 -- # waitforlisten 1012917 /var/tmp/bdevperf.sock 00:22:12.176 13:33:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.176 13:33:09 -- common/autotest_common.sh@819 -- # '[' -z 1012917 ']' 00:22:12.176 13:33:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.176 13:33:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:12.176 13:33:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.176 13:33:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:12.176 13:33:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.176 [2024-07-26 13:33:09.580288] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:12.176 [2024-07-26 13:33:09.580343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012917 ] 00:22:12.176 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.176 [2024-07-26 13:33:09.637052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.437 [2024-07-26 13:33:09.662353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.009 13:33:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:13.009 13:33:10 -- common/autotest_common.sh@852 -- # return 0 00:22:13.009 13:33:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:13.271 [2024-07-26 13:33:10.485893] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.271 [2024-07-26 13:33:10.485929] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:13.271 request: 00:22:13.271 { 00:22:13.271 "name": "TLSTEST", 00:22:13.271 "trtype": "tcp", 00:22:13.271 "traddr": "10.0.0.2", 00:22:13.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.271 "adrfam": "ipv4", 00:22:13.271 "trsvcid": "4420", 00:22:13.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.271 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:13.271 "method": "bdev_nvme_attach_controller", 00:22:13.271 "req_id": 1 00:22:13.271 } 00:22:13.271 Got JSON-RPC error response 00:22:13.271 response: 00:22:13.271 { 00:22:13.271 "code": -22, 00:22:13.271 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:13.271 } 00:22:13.271 13:33:10 -- target/tls.sh@36 -- # killprocess 1012917 00:22:13.271 13:33:10 -- common/autotest_common.sh@926 -- # '[' -z 1012917 ']' 00:22:13.271 13:33:10 -- 
common/autotest_common.sh@930 -- # kill -0 1012917 00:22:13.271 13:33:10 -- common/autotest_common.sh@931 -- # uname 00:22:13.271 13:33:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.271 13:33:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1012917 00:22:13.271 13:33:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:13.271 13:33:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:13.271 13:33:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1012917' 00:22:13.271 killing process with pid 1012917 00:22:13.271 13:33:10 -- common/autotest_common.sh@945 -- # kill 1012917 00:22:13.271 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.271 00:22:13.271 Latency(us) 00:22:13.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.271 =================================================================================================================== 00:22:13.271 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:13.271 13:33:10 -- common/autotest_common.sh@950 -- # wait 1012917 00:22:13.271 13:33:10 -- target/tls.sh@37 -- # return 1 00:22:13.271 13:33:10 -- common/autotest_common.sh@643 -- # es=1 00:22:13.271 13:33:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:13.271 13:33:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:13.271 13:33:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:13.271 13:33:10 -- target/tls.sh@183 -- # killprocess 1010249 00:22:13.271 13:33:10 -- common/autotest_common.sh@926 -- # '[' -z 1010249 ']' 00:22:13.271 13:33:10 -- common/autotest_common.sh@930 -- # kill -0 1010249 00:22:13.271 13:33:10 -- common/autotest_common.sh@931 -- # uname 00:22:13.271 13:33:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.271 13:33:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1010249 00:22:13.271 13:33:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:13.271 13:33:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:13.271 13:33:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1010249' 00:22:13.271 killing process with pid 1010249 00:22:13.271 13:33:10 -- common/autotest_common.sh@945 -- # kill 1010249 00:22:13.271 13:33:10 -- common/autotest_common.sh@950 -- # wait 1010249 00:22:13.533 13:33:10 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:13.533 13:33:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.533 13:33:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:13.533 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 13:33:10 -- nvmf/common.sh@469 -- # nvmfpid=1013472 00:22:13.533 13:33:10 -- nvmf/common.sh@470 -- # waitforlisten 1013472 00:22:13.533 13:33:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:13.533 13:33:10 -- common/autotest_common.sh@819 -- # '[' -z 1013472 ']' 00:22:13.533 13:33:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.533 13:33:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.533 13:33:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
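[editor's note] The -22 "Could not retrieve PSK from file" failure above is the point of the chmod 0666 step: the initiator (bdev_nvme_rpc.c tcp_load_psk) and, in the next test, the target (tcp.c tcp_load_psk) both appear to reject a PSK file whose permissions are looser than owner-only, so loosening the key file is expected to make bdev_nvme_attach_controller and nvmf_subsystem_add_host fail. A minimal sketch of the check being exercised, assuming the same key path as the trace:

    # Hedged sketch of the permission check: the attach must fail once the key file is world-readable.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    chmod 0666 "$KEY"    # too permissive: tcp_load_psk reports "Incorrect permissions for PSK file"
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
        echo "unexpected success with a world-readable PSK file" >&2
        exit 1
    fi
    chmod 0600 "$KEY"    # restore owner-only permissions before the next positive test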
00:22:13.533 13:33:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.533 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 [2024-07-26 13:33:10.896436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:13.533 [2024-07-26 13:33:10.896514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.533 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.533 [2024-07-26 13:33:10.983276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.795 [2024-07-26 13:33:11.010148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.795 [2024-07-26 13:33:11.010247] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.795 [2024-07-26 13:33:11.010253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.795 [2024-07-26 13:33:11.010258] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.795 [2024-07-26 13:33:11.010277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.367 13:33:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.367 13:33:11 -- common/autotest_common.sh@852 -- # return 0 00:22:14.367 13:33:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:14.367 13:33:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:14.367 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:22:14.367 13:33:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.367 13:33:11 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:14.367 13:33:11 -- common/autotest_common.sh@640 -- # local es=0 00:22:14.367 13:33:11 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:14.367 13:33:11 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:14.367 13:33:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:14.367 13:33:11 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:14.367 13:33:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:14.367 13:33:11 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:14.367 13:33:11 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:14.367 13:33:11 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:14.367 [2024-07-26 13:33:11.822643] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.367 13:33:11 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:14.629 13:33:11 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:14.891 [2024-07-26 13:33:12.103340] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.891 [2024-07-26 13:33:12.103538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.891 13:33:12 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:14.891 malloc0 00:22:14.891 13:33:12 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:15.152 13:33:12 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:15.152 [2024-07-26 13:33:12.550352] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:15.152 [2024-07-26 13:33:12.550374] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:15.152 [2024-07-26 13:33:12.550387] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:15.152 request: 00:22:15.152 { 00:22:15.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.152 "host": "nqn.2016-06.io.spdk:host1", 00:22:15.152 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:15.152 "method": "nvmf_subsystem_add_host", 00:22:15.152 "req_id": 1 00:22:15.152 } 00:22:15.152 Got JSON-RPC error response 00:22:15.152 response: 00:22:15.152 { 00:22:15.152 "code": -32603, 00:22:15.152 "message": "Internal error" 00:22:15.152 } 00:22:15.152 13:33:12 -- common/autotest_common.sh@643 -- # es=1 00:22:15.152 13:33:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:15.152 13:33:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:15.152 13:33:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:15.152 13:33:12 -- target/tls.sh@189 -- # killprocess 1013472 00:22:15.152 13:33:12 -- common/autotest_common.sh@926 -- # '[' -z 1013472 ']' 00:22:15.152 13:33:12 -- common/autotest_common.sh@930 -- # kill -0 1013472 00:22:15.152 13:33:12 -- common/autotest_common.sh@931 -- # uname 00:22:15.152 13:33:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:15.152 13:33:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1013472 00:22:15.152 13:33:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:15.152 13:33:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:15.152 13:33:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1013472' 00:22:15.152 killing process with pid 1013472 00:22:15.152 13:33:12 -- common/autotest_common.sh@945 -- # kill 1013472 00:22:15.413 13:33:12 -- common/autotest_common.sh@950 -- # wait 1013472 00:22:15.413 13:33:12 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:15.413 13:33:12 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:15.413 13:33:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:15.413 13:33:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:15.413 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:22:15.413 13:33:12 -- nvmf/common.sh@469 -- # nvmfpid=1013941 00:22:15.413 13:33:12 -- nvmf/common.sh@470 -- # waitforlisten 1013941 00:22:15.413 13:33:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:15.413 13:33:12 -- common/autotest_common.sh@819 -- # '[' -z 1013941 ']' 00:22:15.413 13:33:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.413 13:33:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:15.413 13:33:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.413 13:33:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.413 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:22:15.413 [2024-07-26 13:33:12.802139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:15.413 [2024-07-26 13:33:12.802199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.413 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.413 [2024-07-26 13:33:12.882646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.674 [2024-07-26 13:33:12.909175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:15.674 [2024-07-26 13:33:12.909275] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.674 [2024-07-26 13:33:12.909281] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.674 [2024-07-26 13:33:12.909287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
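[editor's note] The target is relaunched inside the cvl_0_0_ns_spdk network namespace with the full tracepoint mask (-e 0xFFFF), which is why the app prints the spdk_trace hint above. If trace data were needed for debugging this run, the app's own notice suggests something along these lines (the spdk_trace binary path under build/bin is an assumption, not shown in the trace):

    # Hedged sketch: snapshot the target's trace events, per the app_setup_trace notice.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the raw shm file for offline analysis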
00:22:15.674 [2024-07-26 13:33:12.909300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.267 13:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.267 13:33:13 -- common/autotest_common.sh@852 -- # return 0 00:22:16.267 13:33:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:16.267 13:33:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:16.267 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.267 13:33:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.267 13:33:13 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.267 13:33:13 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:16.267 13:33:13 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:16.267 [2024-07-26 13:33:13.713570] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.267 13:33:13 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:16.529 13:33:13 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:16.790 [2024-07-26 13:33:14.010301] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.790 [2024-07-26 13:33:14.010503] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.790 13:33:14 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.790 malloc0 00:22:16.790 13:33:14 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:17.051 13:33:14 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:17.051 13:33:14 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:17.051 13:33:14 -- target/tls.sh@197 -- # bdevperf_pid=1014310 00:22:17.051 13:33:14 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.051 13:33:14 -- target/tls.sh@200 -- # waitforlisten 1014310 /var/tmp/bdevperf.sock 00:22:17.051 13:33:14 -- common/autotest_common.sh@819 -- # '[' -z 1014310 ']' 00:22:17.051 13:33:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.051 13:33:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:17.051 13:33:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
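[editor's note] After the target comes back up with the 0600 key and the subsystem/listener/host wiring is redone, the test snapshots both the target and bdevperf configuration with save_config (the JSON dumps follow). The TLS-relevant fields are the listener's "secure_channel": true and the "psk" path recorded for nqn.2016-06.io.spdk:host1, which are what get replayed into a fresh nvmf_tgt at the end of the section. A small sketch of spot-checking those fields, kept to plain shell:

    # Hedged sketch: confirm the saved target config records the TLS listener and the per-host PSK path.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC save_config | grep -E '"psk"|"secure_channel"'
    # expected output, roughly (values as in the dump below):
    #   "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt"
    #   "secure_channel": true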
00:22:17.051 13:33:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:17.051 13:33:14 -- common/autotest_common.sh@10 -- # set +x 00:22:17.051 [2024-07-26 13:33:14.482937] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:17.051 [2024-07-26 13:33:14.482984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014310 ] 00:22:17.051 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.312 [2024-07-26 13:33:14.531750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.313 [2024-07-26 13:33:14.558208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.885 13:33:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:17.885 13:33:15 -- common/autotest_common.sh@852 -- # return 0 00:22:17.885 13:33:15 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:18.147 [2024-07-26 13:33:15.385926] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.147 TLSTESTn1 00:22:18.147 13:33:15 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:18.407 13:33:15 -- target/tls.sh@205 -- # tgtconf='{ 00:22:18.407 "subsystems": [ 00:22:18.407 { 00:22:18.407 "subsystem": "iobuf", 00:22:18.407 "config": [ 00:22:18.407 { 00:22:18.407 "method": "iobuf_set_options", 00:22:18.407 "params": { 00:22:18.407 "small_pool_count": 8192, 00:22:18.407 "large_pool_count": 1024, 00:22:18.407 "small_bufsize": 8192, 00:22:18.407 "large_bufsize": 135168 00:22:18.407 } 00:22:18.407 } 00:22:18.407 ] 00:22:18.407 }, 00:22:18.407 { 00:22:18.407 "subsystem": "sock", 00:22:18.407 "config": [ 00:22:18.407 { 00:22:18.407 "method": "sock_impl_set_options", 00:22:18.408 "params": { 00:22:18.408 "impl_name": "posix", 00:22:18.408 "recv_buf_size": 2097152, 00:22:18.408 "send_buf_size": 2097152, 00:22:18.408 "enable_recv_pipe": true, 00:22:18.408 "enable_quickack": false, 00:22:18.408 "enable_placement_id": 0, 00:22:18.408 "enable_zerocopy_send_server": true, 00:22:18.408 "enable_zerocopy_send_client": false, 00:22:18.408 "zerocopy_threshold": 0, 00:22:18.408 "tls_version": 0, 00:22:18.408 "enable_ktls": false 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "sock_impl_set_options", 00:22:18.408 "params": { 00:22:18.408 "impl_name": "ssl", 00:22:18.408 "recv_buf_size": 4096, 00:22:18.408 "send_buf_size": 4096, 00:22:18.408 "enable_recv_pipe": true, 00:22:18.408 "enable_quickack": false, 00:22:18.408 "enable_placement_id": 0, 00:22:18.408 "enable_zerocopy_send_server": true, 00:22:18.408 "enable_zerocopy_send_client": false, 00:22:18.408 "zerocopy_threshold": 0, 00:22:18.408 "tls_version": 0, 00:22:18.408 "enable_ktls": false 00:22:18.408 } 00:22:18.408 } 00:22:18.408 ] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "vmd", 00:22:18.408 "config": [] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "accel", 00:22:18.408 "config": [ 00:22:18.408 { 00:22:18.408 "method": "accel_set_options", 00:22:18.408 "params": { 00:22:18.408 "small_cache_size": 128, 
00:22:18.408 "large_cache_size": 16, 00:22:18.408 "task_count": 2048, 00:22:18.408 "sequence_count": 2048, 00:22:18.408 "buf_count": 2048 00:22:18.408 } 00:22:18.408 } 00:22:18.408 ] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "bdev", 00:22:18.408 "config": [ 00:22:18.408 { 00:22:18.408 "method": "bdev_set_options", 00:22:18.408 "params": { 00:22:18.408 "bdev_io_pool_size": 65535, 00:22:18.408 "bdev_io_cache_size": 256, 00:22:18.408 "bdev_auto_examine": true, 00:22:18.408 "iobuf_small_cache_size": 128, 00:22:18.408 "iobuf_large_cache_size": 16 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_raid_set_options", 00:22:18.408 "params": { 00:22:18.408 "process_window_size_kb": 1024 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_iscsi_set_options", 00:22:18.408 "params": { 00:22:18.408 "timeout_sec": 30 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_nvme_set_options", 00:22:18.408 "params": { 00:22:18.408 "action_on_timeout": "none", 00:22:18.408 "timeout_us": 0, 00:22:18.408 "timeout_admin_us": 0, 00:22:18.408 "keep_alive_timeout_ms": 10000, 00:22:18.408 "transport_retry_count": 4, 00:22:18.408 "arbitration_burst": 0, 00:22:18.408 "low_priority_weight": 0, 00:22:18.408 "medium_priority_weight": 0, 00:22:18.408 "high_priority_weight": 0, 00:22:18.408 "nvme_adminq_poll_period_us": 10000, 00:22:18.408 "nvme_ioq_poll_period_us": 0, 00:22:18.408 "io_queue_requests": 0, 00:22:18.408 "delay_cmd_submit": true, 00:22:18.408 "bdev_retry_count": 3, 00:22:18.408 "transport_ack_timeout": 0, 00:22:18.408 "ctrlr_loss_timeout_sec": 0, 00:22:18.408 "reconnect_delay_sec": 0, 00:22:18.408 "fast_io_fail_timeout_sec": 0, 00:22:18.408 "generate_uuids": false, 00:22:18.408 "transport_tos": 0, 00:22:18.408 "io_path_stat": false, 00:22:18.408 "allow_accel_sequence": false 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_nvme_set_hotplug", 00:22:18.408 "params": { 00:22:18.408 "period_us": 100000, 00:22:18.408 "enable": false 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_malloc_create", 00:22:18.408 "params": { 00:22:18.408 "name": "malloc0", 00:22:18.408 "num_blocks": 8192, 00:22:18.408 "block_size": 4096, 00:22:18.408 "physical_block_size": 4096, 00:22:18.408 "uuid": "1d1fe755-915f-4e02-93bc-dc3e418cd4c2", 00:22:18.408 "optimal_io_boundary": 0 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "bdev_wait_for_examine" 00:22:18.408 } 00:22:18.408 ] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "nbd", 00:22:18.408 "config": [] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "scheduler", 00:22:18.408 "config": [ 00:22:18.408 { 00:22:18.408 "method": "framework_set_scheduler", 00:22:18.408 "params": { 00:22:18.408 "name": "static" 00:22:18.408 } 00:22:18.408 } 00:22:18.408 ] 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "subsystem": "nvmf", 00:22:18.408 "config": [ 00:22:18.408 { 00:22:18.408 "method": "nvmf_set_config", 00:22:18.408 "params": { 00:22:18.408 "discovery_filter": "match_any", 00:22:18.408 "admin_cmd_passthru": { 00:22:18.408 "identify_ctrlr": false 00:22:18.408 } 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_set_max_subsystems", 00:22:18.408 "params": { 00:22:18.408 "max_subsystems": 1024 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_set_crdt", 00:22:18.408 "params": { 00:22:18.408 "crdt1": 0, 00:22:18.408 "crdt2": 0, 00:22:18.408 "crdt3": 0 00:22:18.408 } 
00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_create_transport", 00:22:18.408 "params": { 00:22:18.408 "trtype": "TCP", 00:22:18.408 "max_queue_depth": 128, 00:22:18.408 "max_io_qpairs_per_ctrlr": 127, 00:22:18.408 "in_capsule_data_size": 4096, 00:22:18.408 "max_io_size": 131072, 00:22:18.408 "io_unit_size": 131072, 00:22:18.408 "max_aq_depth": 128, 00:22:18.408 "num_shared_buffers": 511, 00:22:18.408 "buf_cache_size": 4294967295, 00:22:18.408 "dif_insert_or_strip": false, 00:22:18.408 "zcopy": false, 00:22:18.408 "c2h_success": false, 00:22:18.408 "sock_priority": 0, 00:22:18.408 "abort_timeout_sec": 1 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_create_subsystem", 00:22:18.408 "params": { 00:22:18.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.408 "allow_any_host": false, 00:22:18.408 "serial_number": "SPDK00000000000001", 00:22:18.408 "model_number": "SPDK bdev Controller", 00:22:18.408 "max_namespaces": 10, 00:22:18.408 "min_cntlid": 1, 00:22:18.408 "max_cntlid": 65519, 00:22:18.408 "ana_reporting": false 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_subsystem_add_host", 00:22:18.408 "params": { 00:22:18.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.408 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.408 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_subsystem_add_ns", 00:22:18.408 "params": { 00:22:18.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.408 "namespace": { 00:22:18.408 "nsid": 1, 00:22:18.408 "bdev_name": "malloc0", 00:22:18.408 "nguid": "1D1FE755915F4E0293BCDC3E418CD4C2", 00:22:18.408 "uuid": "1d1fe755-915f-4e02-93bc-dc3e418cd4c2" 00:22:18.408 } 00:22:18.408 } 00:22:18.408 }, 00:22:18.408 { 00:22:18.408 "method": "nvmf_subsystem_add_listener", 00:22:18.408 "params": { 00:22:18.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.408 "listen_address": { 00:22:18.408 "trtype": "TCP", 00:22:18.408 "adrfam": "IPv4", 00:22:18.408 "traddr": "10.0.0.2", 00:22:18.408 "trsvcid": "4420" 00:22:18.408 }, 00:22:18.408 "secure_channel": true 00:22:18.408 } 00:22:18.408 } 00:22:18.409 ] 00:22:18.409 } 00:22:18.409 ] 00:22:18.409 }' 00:22:18.409 13:33:15 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:18.670 13:33:15 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:18.670 "subsystems": [ 00:22:18.670 { 00:22:18.670 "subsystem": "iobuf", 00:22:18.670 "config": [ 00:22:18.670 { 00:22:18.670 "method": "iobuf_set_options", 00:22:18.670 "params": { 00:22:18.670 "small_pool_count": 8192, 00:22:18.670 "large_pool_count": 1024, 00:22:18.670 "small_bufsize": 8192, 00:22:18.670 "large_bufsize": 135168 00:22:18.670 } 00:22:18.670 } 00:22:18.670 ] 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "subsystem": "sock", 00:22:18.670 "config": [ 00:22:18.670 { 00:22:18.670 "method": "sock_impl_set_options", 00:22:18.670 "params": { 00:22:18.670 "impl_name": "posix", 00:22:18.670 "recv_buf_size": 2097152, 00:22:18.670 "send_buf_size": 2097152, 00:22:18.670 "enable_recv_pipe": true, 00:22:18.670 "enable_quickack": false, 00:22:18.670 "enable_placement_id": 0, 00:22:18.670 "enable_zerocopy_send_server": true, 00:22:18.670 "enable_zerocopy_send_client": false, 00:22:18.670 "zerocopy_threshold": 0, 00:22:18.670 "tls_version": 0, 00:22:18.670 "enable_ktls": false 00:22:18.670 } 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "method": 
"sock_impl_set_options", 00:22:18.670 "params": { 00:22:18.670 "impl_name": "ssl", 00:22:18.670 "recv_buf_size": 4096, 00:22:18.670 "send_buf_size": 4096, 00:22:18.670 "enable_recv_pipe": true, 00:22:18.670 "enable_quickack": false, 00:22:18.670 "enable_placement_id": 0, 00:22:18.670 "enable_zerocopy_send_server": true, 00:22:18.670 "enable_zerocopy_send_client": false, 00:22:18.670 "zerocopy_threshold": 0, 00:22:18.670 "tls_version": 0, 00:22:18.670 "enable_ktls": false 00:22:18.670 } 00:22:18.670 } 00:22:18.670 ] 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "subsystem": "vmd", 00:22:18.670 "config": [] 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "subsystem": "accel", 00:22:18.670 "config": [ 00:22:18.670 { 00:22:18.670 "method": "accel_set_options", 00:22:18.670 "params": { 00:22:18.670 "small_cache_size": 128, 00:22:18.670 "large_cache_size": 16, 00:22:18.670 "task_count": 2048, 00:22:18.670 "sequence_count": 2048, 00:22:18.670 "buf_count": 2048 00:22:18.670 } 00:22:18.670 } 00:22:18.670 ] 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "subsystem": "bdev", 00:22:18.670 "config": [ 00:22:18.670 { 00:22:18.670 "method": "bdev_set_options", 00:22:18.670 "params": { 00:22:18.670 "bdev_io_pool_size": 65535, 00:22:18.670 "bdev_io_cache_size": 256, 00:22:18.670 "bdev_auto_examine": true, 00:22:18.670 "iobuf_small_cache_size": 128, 00:22:18.670 "iobuf_large_cache_size": 16 00:22:18.670 } 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "method": "bdev_raid_set_options", 00:22:18.670 "params": { 00:22:18.670 "process_window_size_kb": 1024 00:22:18.670 } 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "method": "bdev_iscsi_set_options", 00:22:18.670 "params": { 00:22:18.670 "timeout_sec": 30 00:22:18.670 } 00:22:18.670 }, 00:22:18.670 { 00:22:18.670 "method": "bdev_nvme_set_options", 00:22:18.670 "params": { 00:22:18.671 "action_on_timeout": "none", 00:22:18.671 "timeout_us": 0, 00:22:18.671 "timeout_admin_us": 0, 00:22:18.671 "keep_alive_timeout_ms": 10000, 00:22:18.671 "transport_retry_count": 4, 00:22:18.671 "arbitration_burst": 0, 00:22:18.671 "low_priority_weight": 0, 00:22:18.671 "medium_priority_weight": 0, 00:22:18.671 "high_priority_weight": 0, 00:22:18.671 "nvme_adminq_poll_period_us": 10000, 00:22:18.671 "nvme_ioq_poll_period_us": 0, 00:22:18.671 "io_queue_requests": 512, 00:22:18.671 "delay_cmd_submit": true, 00:22:18.671 "bdev_retry_count": 3, 00:22:18.671 "transport_ack_timeout": 0, 00:22:18.671 "ctrlr_loss_timeout_sec": 0, 00:22:18.671 "reconnect_delay_sec": 0, 00:22:18.671 "fast_io_fail_timeout_sec": 0, 00:22:18.671 "generate_uuids": false, 00:22:18.671 "transport_tos": 0, 00:22:18.671 "io_path_stat": false, 00:22:18.671 "allow_accel_sequence": false 00:22:18.671 } 00:22:18.671 }, 00:22:18.671 { 00:22:18.671 "method": "bdev_nvme_attach_controller", 00:22:18.671 "params": { 00:22:18.671 "name": "TLSTEST", 00:22:18.671 "trtype": "TCP", 00:22:18.671 "adrfam": "IPv4", 00:22:18.671 "traddr": "10.0.0.2", 00:22:18.671 "trsvcid": "4420", 00:22:18.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.671 "prchk_reftag": false, 00:22:18.671 "prchk_guard": false, 00:22:18.671 "ctrlr_loss_timeout_sec": 0, 00:22:18.671 "reconnect_delay_sec": 0, 00:22:18.671 "fast_io_fail_timeout_sec": 0, 00:22:18.671 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:18.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.671 "hdgst": false, 00:22:18.671 "ddgst": false 00:22:18.671 } 00:22:18.671 }, 00:22:18.671 { 00:22:18.671 "method": "bdev_nvme_set_hotplug", 00:22:18.671 
"params": { 00:22:18.671 "period_us": 100000, 00:22:18.671 "enable": false 00:22:18.671 } 00:22:18.671 }, 00:22:18.671 { 00:22:18.671 "method": "bdev_wait_for_examine" 00:22:18.671 } 00:22:18.671 ] 00:22:18.671 }, 00:22:18.671 { 00:22:18.671 "subsystem": "nbd", 00:22:18.671 "config": [] 00:22:18.671 } 00:22:18.671 ] 00:22:18.671 }' 00:22:18.671 13:33:15 -- target/tls.sh@208 -- # killprocess 1014310 00:22:18.671 13:33:15 -- common/autotest_common.sh@926 -- # '[' -z 1014310 ']' 00:22:18.671 13:33:15 -- common/autotest_common.sh@930 -- # kill -0 1014310 00:22:18.671 13:33:15 -- common/autotest_common.sh@931 -- # uname 00:22:18.671 13:33:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:18.671 13:33:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1014310 00:22:18.671 13:33:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:18.671 13:33:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:18.671 13:33:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1014310' 00:22:18.671 killing process with pid 1014310 00:22:18.671 13:33:16 -- common/autotest_common.sh@945 -- # kill 1014310 00:22:18.671 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.671 00:22:18.671 Latency(us) 00:22:18.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.671 =================================================================================================================== 00:22:18.671 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:18.671 13:33:16 -- common/autotest_common.sh@950 -- # wait 1014310 00:22:18.671 13:33:16 -- target/tls.sh@209 -- # killprocess 1013941 00:22:18.671 13:33:16 -- common/autotest_common.sh@926 -- # '[' -z 1013941 ']' 00:22:18.671 13:33:16 -- common/autotest_common.sh@930 -- # kill -0 1013941 00:22:18.671 13:33:16 -- common/autotest_common.sh@931 -- # uname 00:22:18.671 13:33:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:18.671 13:33:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1013941 00:22:18.933 13:33:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:18.933 13:33:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:18.933 13:33:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1013941' 00:22:18.933 killing process with pid 1013941 00:22:18.933 13:33:16 -- common/autotest_common.sh@945 -- # kill 1013941 00:22:18.933 13:33:16 -- common/autotest_common.sh@950 -- # wait 1013941 00:22:18.933 13:33:16 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:18.933 13:33:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:18.933 13:33:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:18.933 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:18.933 13:33:16 -- target/tls.sh@212 -- # echo '{ 00:22:18.933 "subsystems": [ 00:22:18.933 { 00:22:18.933 "subsystem": "iobuf", 00:22:18.933 "config": [ 00:22:18.933 { 00:22:18.933 "method": "iobuf_set_options", 00:22:18.933 "params": { 00:22:18.933 "small_pool_count": 8192, 00:22:18.933 "large_pool_count": 1024, 00:22:18.933 "small_bufsize": 8192, 00:22:18.933 "large_bufsize": 135168 00:22:18.933 } 00:22:18.933 } 00:22:18.933 ] 00:22:18.933 }, 00:22:18.933 { 00:22:18.933 "subsystem": "sock", 00:22:18.933 "config": [ 00:22:18.933 { 00:22:18.933 "method": "sock_impl_set_options", 00:22:18.933 "params": { 00:22:18.933 "impl_name": "posix", 00:22:18.933 
"recv_buf_size": 2097152, 00:22:18.933 "send_buf_size": 2097152, 00:22:18.933 "enable_recv_pipe": true, 00:22:18.933 "enable_quickack": false, 00:22:18.933 "enable_placement_id": 0, 00:22:18.933 "enable_zerocopy_send_server": true, 00:22:18.933 "enable_zerocopy_send_client": false, 00:22:18.933 "zerocopy_threshold": 0, 00:22:18.933 "tls_version": 0, 00:22:18.933 "enable_ktls": false 00:22:18.933 } 00:22:18.933 }, 00:22:18.933 { 00:22:18.933 "method": "sock_impl_set_options", 00:22:18.933 "params": { 00:22:18.933 "impl_name": "ssl", 00:22:18.933 "recv_buf_size": 4096, 00:22:18.933 "send_buf_size": 4096, 00:22:18.933 "enable_recv_pipe": true, 00:22:18.933 "enable_quickack": false, 00:22:18.933 "enable_placement_id": 0, 00:22:18.933 "enable_zerocopy_send_server": true, 00:22:18.934 "enable_zerocopy_send_client": false, 00:22:18.934 "zerocopy_threshold": 0, 00:22:18.934 "tls_version": 0, 00:22:18.934 "enable_ktls": false 00:22:18.934 } 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "vmd", 00:22:18.934 "config": [] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "accel", 00:22:18.934 "config": [ 00:22:18.934 { 00:22:18.934 "method": "accel_set_options", 00:22:18.934 "params": { 00:22:18.934 "small_cache_size": 128, 00:22:18.934 "large_cache_size": 16, 00:22:18.934 "task_count": 2048, 00:22:18.934 "sequence_count": 2048, 00:22:18.934 "buf_count": 2048 00:22:18.934 } 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "bdev", 00:22:18.934 "config": [ 00:22:18.934 { 00:22:18.934 "method": "bdev_set_options", 00:22:18.934 "params": { 00:22:18.934 "bdev_io_pool_size": 65535, 00:22:18.934 "bdev_io_cache_size": 256, 00:22:18.934 "bdev_auto_examine": true, 00:22:18.934 "iobuf_small_cache_size": 128, 00:22:18.934 "iobuf_large_cache_size": 16 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_raid_set_options", 00:22:18.934 "params": { 00:22:18.934 "process_window_size_kb": 1024 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_iscsi_set_options", 00:22:18.934 "params": { 00:22:18.934 "timeout_sec": 30 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_nvme_set_options", 00:22:18.934 "params": { 00:22:18.934 "action_on_timeout": "none", 00:22:18.934 "timeout_us": 0, 00:22:18.934 "timeout_admin_us": 0, 00:22:18.934 "keep_alive_timeout_ms": 10000, 00:22:18.934 "transport_retry_count": 4, 00:22:18.934 "arbitration_burst": 0, 00:22:18.934 "low_priority_weight": 0, 00:22:18.934 "medium_priority_weight": 0, 00:22:18.934 "high_priority_weight": 0, 00:22:18.934 "nvme_adminq_poll_period_us": 10000, 00:22:18.934 "nvme_ioq_poll_period_us": 0, 00:22:18.934 "io_queue_requests": 0, 00:22:18.934 "delay_cmd_submit": true, 00:22:18.934 "bdev_retry_count": 3, 00:22:18.934 "transport_ack_timeout": 0, 00:22:18.934 "ctrlr_loss_timeout_sec": 0, 00:22:18.934 "reconnect_delay_sec": 0, 00:22:18.934 "fast_io_fail_timeout_sec": 0, 00:22:18.934 "generate_uuids": false, 00:22:18.934 "transport_tos": 0, 00:22:18.934 "io_path_stat": false, 00:22:18.934 "allow_accel_sequence": false 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_nvme_set_hotplug", 00:22:18.934 "params": { 00:22:18.934 "period_us": 100000, 00:22:18.934 "enable": false 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_malloc_create", 00:22:18.934 "params": { 00:22:18.934 "name": "malloc0", 00:22:18.934 "num_blocks": 8192, 00:22:18.934 "block_size": 4096, 
00:22:18.934 "physical_block_size": 4096, 00:22:18.934 "uuid": "1d1fe755-915f-4e02-93bc-dc3e418cd4c2", 00:22:18.934 "optimal_io_boundary": 0 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "bdev_wait_for_examine" 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "nbd", 00:22:18.934 "config": [] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "scheduler", 00:22:18.934 "config": [ 00:22:18.934 { 00:22:18.934 "method": "framework_set_scheduler", 00:22:18.934 "params": { 00:22:18.934 "name": "static" 00:22:18.934 } 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "subsystem": "nvmf", 00:22:18.934 "config": [ 00:22:18.934 { 00:22:18.934 "method": "nvmf_set_config", 00:22:18.934 "params": { 00:22:18.934 "discovery_filter": "match_any", 00:22:18.934 "admin_cmd_passthru": { 00:22:18.934 "identify_ctrlr": false 00:22:18.934 } 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_set_max_subsystems", 00:22:18.934 "params": { 00:22:18.934 "max_subsystems": 1024 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_set_crdt", 00:22:18.934 "params": { 00:22:18.934 "crdt1": 0, 00:22:18.934 "crdt2": 0, 00:22:18.934 "crdt3": 0 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_create_transport", 00:22:18.934 "params": { 00:22:18.934 "trtype": "TCP", 00:22:18.934 "max_queue_depth": 128, 00:22:18.934 "max_io_qpairs_per_ctrlr": 127, 00:22:18.934 "in_capsule_data_size": 4096, 00:22:18.934 "max_io_size": 131072, 00:22:18.934 "io_unit_size": 131072, 00:22:18.934 "max_aq_depth": 128, 00:22:18.934 "num_shared_buffers": 511, 00:22:18.934 "buf_cache_size": 4294967295, 00:22:18.934 "dif_insert_or_strip": false, 00:22:18.934 "zcopy": false, 00:22:18.934 "c2h_success": false, 00:22:18.934 "sock_priority": 0, 00:22:18.934 "abort_timeout_sec": 1 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_create_subsystem", 00:22:18.934 "params": { 00:22:18.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.934 "allow_any_host": false, 00:22:18.934 "serial_number": "SPDK00000000000001", 00:22:18.934 "model_number": "SPDK bdev Controller", 00:22:18.934 "max_namespaces": 10, 00:22:18.934 "min_cntlid": 1, 00:22:18.934 "max_cntlid": 65519, 00:22:18.934 "ana_reporting": false 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_subsystem_add_host", 00:22:18.934 "params": { 00:22:18.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.934 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.934 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_subsystem_add_ns", 00:22:18.934 "params": { 00:22:18.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.934 "namespace": { 00:22:18.934 "nsid": 1, 00:22:18.934 "bdev_name": "malloc0", 00:22:18.934 "nguid": "1D1FE755915F4E0293BCDC3E418CD4C2", 00:22:18.934 "uuid": "1d1fe755-915f-4e02-93bc-dc3e418cd4c2" 00:22:18.934 } 00:22:18.934 } 00:22:18.934 }, 00:22:18.934 { 00:22:18.934 "method": "nvmf_subsystem_add_listener", 00:22:18.934 "params": { 00:22:18.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.934 "listen_address": { 00:22:18.934 "trtype": "TCP", 00:22:18.934 "adrfam": "IPv4", 00:22:18.934 "traddr": "10.0.0.2", 00:22:18.934 "trsvcid": "4420" 00:22:18.934 }, 00:22:18.934 "secure_channel": true 00:22:18.934 } 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 } 00:22:18.934 ] 00:22:18.934 }' 00:22:18.934 
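For reference, the long JSON blob echoed above is the complete target-side configuration handed to nvmf_tgt over -c /dev/fd/62. The parts that actually enable TLS are the TCP transport, the PSK registered for host1 with nvmf_subsystem_add_host, and the listener created with secure_channel set to true. A trimmed-down sketch of just that portion is shown below (NQNs, PSK path and listen address copied from the log above; the malloc0 bdev/namespace steps and all other subsystems are omitted and left at their defaults), started the same way the harness does it, only from a file instead of /dev/fd/62:

# Minimal sketch only - not the full test config echoed above.
cat > /tmp/tls_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "serial_number": "SPDK00000000000001",
                      "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } }
      ]
    }
  ]
}
EOF
# Same invocation pattern as the test: core mask 0x2, config applied at startup.
./build/bin/nvmf_tgt -m 0x2 -c /tmp/tls_target.json

Without the bdev/namespace entries the subsystem comes up empty, so this sketch only illustrates how the PSK and the secure_channel listener are wired together; the full config above adds malloc0 as namespace 1 for the actual I/O test.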
13:33:16 -- nvmf/common.sh@469 -- # nvmfpid=1014666 00:22:18.934 13:33:16 -- nvmf/common.sh@470 -- # waitforlisten 1014666 00:22:18.935 13:33:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:18.935 13:33:16 -- common/autotest_common.sh@819 -- # '[' -z 1014666 ']' 00:22:18.935 13:33:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.935 13:33:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:18.935 13:33:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.935 13:33:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:18.935 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:18.935 [2024-07-26 13:33:16.332343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:18.935 [2024-07-26 13:33:16.332394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.935 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.196 [2024-07-26 13:33:16.413881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.196 [2024-07-26 13:33:16.440138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.196 [2024-07-26 13:33:16.440242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.196 [2024-07-26 13:33:16.440248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.196 [2024-07-26 13:33:16.440253] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.196 [2024-07-26 13:33:16.440265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.196 [2024-07-26 13:33:16.609553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.196 [2024-07-26 13:33:16.641575] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.196 [2024-07-26 13:33:16.641768] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.769 13:33:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.769 13:33:17 -- common/autotest_common.sh@852 -- # return 0 00:22:19.769 13:33:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:19.769 13:33:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:19.769 13:33:17 -- common/autotest_common.sh@10 -- # set +x 00:22:19.769 13:33:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.769 13:33:17 -- target/tls.sh@216 -- # bdevperf_pid=1014730 00:22:19.769 13:33:17 -- target/tls.sh@217 -- # waitforlisten 1014730 /var/tmp/bdevperf.sock 00:22:19.769 13:33:17 -- common/autotest_common.sh@819 -- # '[' -z 1014730 ']' 00:22:19.770 13:33:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.770 13:33:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.770 13:33:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.770 13:33:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.770 13:33:17 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:19.770 13:33:17 -- common/autotest_common.sh@10 -- # set +x 00:22:19.770 13:33:17 -- target/tls.sh@213 -- # echo '{ 00:22:19.770 "subsystems": [ 00:22:19.770 { 00:22:19.770 "subsystem": "iobuf", 00:22:19.770 "config": [ 00:22:19.770 { 00:22:19.770 "method": "iobuf_set_options", 00:22:19.770 "params": { 00:22:19.770 "small_pool_count": 8192, 00:22:19.770 "large_pool_count": 1024, 00:22:19.770 "small_bufsize": 8192, 00:22:19.770 "large_bufsize": 135168 00:22:19.770 } 00:22:19.770 } 00:22:19.770 ] 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "subsystem": "sock", 00:22:19.770 "config": [ 00:22:19.770 { 00:22:19.770 "method": "sock_impl_set_options", 00:22:19.770 "params": { 00:22:19.770 "impl_name": "posix", 00:22:19.770 "recv_buf_size": 2097152, 00:22:19.770 "send_buf_size": 2097152, 00:22:19.770 "enable_recv_pipe": true, 00:22:19.770 "enable_quickack": false, 00:22:19.770 "enable_placement_id": 0, 00:22:19.770 "enable_zerocopy_send_server": true, 00:22:19.770 "enable_zerocopy_send_client": false, 00:22:19.770 "zerocopy_threshold": 0, 00:22:19.770 "tls_version": 0, 00:22:19.770 "enable_ktls": false 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "sock_impl_set_options", 00:22:19.770 "params": { 00:22:19.770 "impl_name": "ssl", 00:22:19.770 "recv_buf_size": 4096, 00:22:19.770 "send_buf_size": 4096, 00:22:19.770 "enable_recv_pipe": true, 00:22:19.770 "enable_quickack": false, 00:22:19.770 "enable_placement_id": 0, 00:22:19.770 "enable_zerocopy_send_server": true, 00:22:19.770 "enable_zerocopy_send_client": false, 00:22:19.770 "zerocopy_threshold": 0, 00:22:19.770 "tls_version": 0, 
00:22:19.770 "enable_ktls": false 00:22:19.770 } 00:22:19.770 } 00:22:19.770 ] 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "subsystem": "vmd", 00:22:19.770 "config": [] 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "subsystem": "accel", 00:22:19.770 "config": [ 00:22:19.770 { 00:22:19.770 "method": "accel_set_options", 00:22:19.770 "params": { 00:22:19.770 "small_cache_size": 128, 00:22:19.770 "large_cache_size": 16, 00:22:19.770 "task_count": 2048, 00:22:19.770 "sequence_count": 2048, 00:22:19.770 "buf_count": 2048 00:22:19.770 } 00:22:19.770 } 00:22:19.770 ] 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "subsystem": "bdev", 00:22:19.770 "config": [ 00:22:19.770 { 00:22:19.770 "method": "bdev_set_options", 00:22:19.770 "params": { 00:22:19.770 "bdev_io_pool_size": 65535, 00:22:19.770 "bdev_io_cache_size": 256, 00:22:19.770 "bdev_auto_examine": true, 00:22:19.770 "iobuf_small_cache_size": 128, 00:22:19.770 "iobuf_large_cache_size": 16 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_raid_set_options", 00:22:19.770 "params": { 00:22:19.770 "process_window_size_kb": 1024 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_iscsi_set_options", 00:22:19.770 "params": { 00:22:19.770 "timeout_sec": 30 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_nvme_set_options", 00:22:19.770 "params": { 00:22:19.770 "action_on_timeout": "none", 00:22:19.770 "timeout_us": 0, 00:22:19.770 "timeout_admin_us": 0, 00:22:19.770 "keep_alive_timeout_ms": 10000, 00:22:19.770 "transport_retry_count": 4, 00:22:19.770 "arbitration_burst": 0, 00:22:19.770 "low_priority_weight": 0, 00:22:19.770 "medium_priority_weight": 0, 00:22:19.770 "high_priority_weight": 0, 00:22:19.770 "nvme_adminq_poll_period_us": 10000, 00:22:19.770 "nvme_ioq_poll_period_us": 0, 00:22:19.770 "io_queue_requests": 512, 00:22:19.770 "delay_cmd_submit": true, 00:22:19.770 "bdev_retry_count": 3, 00:22:19.770 "transport_ack_timeout": 0, 00:22:19.770 "ctrlr_loss_timeout_sec": 0, 00:22:19.770 "reconnect_delay_sec": 0, 00:22:19.770 "fast_io_fail_timeout_sec": 0, 00:22:19.770 "generate_uuids": false, 00:22:19.770 "transport_tos": 0, 00:22:19.770 "io_path_stat": false, 00:22:19.770 "allow_accel_sequence": false 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_nvme_attach_controller", 00:22:19.770 "params": { 00:22:19.770 "name": "TLSTEST", 00:22:19.770 "trtype": "TCP", 00:22:19.770 "adrfam": "IPv4", 00:22:19.770 "traddr": "10.0.0.2", 00:22:19.770 "trsvcid": "4420", 00:22:19.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.770 "prchk_reftag": false, 00:22:19.770 "prchk_guard": false, 00:22:19.770 "ctrlr_loss_timeout_sec": 0, 00:22:19.770 "reconnect_delay_sec": 0, 00:22:19.770 "fast_io_fail_timeout_sec": 0, 00:22:19.770 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:19.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:19.770 "hdgst": false, 00:22:19.770 "ddgst": false 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_nvme_set_hotplug", 00:22:19.770 "params": { 00:22:19.770 "period_us": 100000, 00:22:19.770 "enable": false 00:22:19.770 } 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "method": "bdev_wait_for_examine" 00:22:19.770 } 00:22:19.770 ] 00:22:19.770 }, 00:22:19.770 { 00:22:19.770 "subsystem": "nbd", 00:22:19.770 "config": [] 00:22:19.770 } 00:22:19.770 ] 00:22:19.770 }' 00:22:19.770 [2024-07-26 13:33:17.167316] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 
initialization... 00:22:19.770 [2024-07-26 13:33:17.167367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014730 ] 00:22:19.770 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.770 [2024-07-26 13:33:17.216165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.770 [2024-07-26 13:33:17.242659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.032 [2024-07-26 13:33:17.353090] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.604 13:33:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.604 13:33:17 -- common/autotest_common.sh@852 -- # return 0 00:22:20.604 13:33:17 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:20.604 Running I/O for 10 seconds... 00:22:30.661 00:22:30.661 Latency(us) 00:22:30.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:30.662 Verification LBA range: start 0x0 length 0x2000 00:22:30.662 TLSTESTn1 : 10.07 1487.46 5.81 0.00 0.00 85867.43 9448.11 93934.93 00:22:30.662 =================================================================================================================== 00:22:30.662 Total : 1487.46 5.81 0.00 0.00 85867.43 9448.11 93934.93 00:22:30.662 0 00:22:30.662 13:33:28 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.662 13:33:28 -- target/tls.sh@223 -- # killprocess 1014730 00:22:30.662 13:33:28 -- common/autotest_common.sh@926 -- # '[' -z 1014730 ']' 00:22:30.662 13:33:28 -- common/autotest_common.sh@930 -- # kill -0 1014730 00:22:30.662 13:33:28 -- common/autotest_common.sh@931 -- # uname 00:22:30.662 13:33:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:30.662 13:33:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1014730 00:22:30.922 13:33:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:30.922 13:33:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:30.922 13:33:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1014730' 00:22:30.922 killing process with pid 1014730 00:22:30.922 13:33:28 -- common/autotest_common.sh@945 -- # kill 1014730 00:22:30.922 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.922 00:22:30.922 Latency(us) 00:22:30.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.922 =================================================================================================================== 00:22:30.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.922 13:33:28 -- common/autotest_common.sh@950 -- # wait 1014730 00:22:30.922 13:33:28 -- target/tls.sh@224 -- # killprocess 1014666 00:22:30.922 13:33:28 -- common/autotest_common.sh@926 -- # '[' -z 1014666 ']' 00:22:30.922 13:33:28 -- common/autotest_common.sh@930 -- # kill -0 1014666 00:22:30.922 13:33:28 -- common/autotest_common.sh@931 -- # uname 00:22:30.922 13:33:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:30.922 13:33:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1014666 00:22:30.922 13:33:28 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:30.922 13:33:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:30.922 13:33:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1014666' 00:22:30.922 killing process with pid 1014666 00:22:30.922 13:33:28 -- common/autotest_common.sh@945 -- # kill 1014666 00:22:30.922 13:33:28 -- common/autotest_common.sh@950 -- # wait 1014666 00:22:31.184 13:33:28 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:31.184 13:33:28 -- target/tls.sh@227 -- # cleanup 00:22:31.184 13:33:28 -- target/tls.sh@15 -- # process_shm --id 0 00:22:31.184 13:33:28 -- common/autotest_common.sh@796 -- # type=--id 00:22:31.184 13:33:28 -- common/autotest_common.sh@797 -- # id=0 00:22:31.184 13:33:28 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:31.184 13:33:28 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:31.184 13:33:28 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:31.184 13:33:28 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:31.184 13:33:28 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:31.184 13:33:28 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:31.184 nvmf_trace.0 00:22:31.184 13:33:28 -- common/autotest_common.sh@811 -- # return 0 00:22:31.184 13:33:28 -- target/tls.sh@16 -- # killprocess 1014730 00:22:31.184 13:33:28 -- common/autotest_common.sh@926 -- # '[' -z 1014730 ']' 00:22:31.184 13:33:28 -- common/autotest_common.sh@930 -- # kill -0 1014730 00:22:31.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1014730) - No such process 00:22:31.184 13:33:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1014730 is not found' 00:22:31.184 Process with pid 1014730 is not found 00:22:31.184 13:33:28 -- target/tls.sh@17 -- # nvmftestfini 00:22:31.184 13:33:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:31.184 13:33:28 -- nvmf/common.sh@116 -- # sync 00:22:31.184 13:33:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:31.184 13:33:28 -- nvmf/common.sh@119 -- # set +e 00:22:31.184 13:33:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:31.184 13:33:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:31.184 rmmod nvme_tcp 00:22:31.184 rmmod nvme_fabrics 00:22:31.184 rmmod nvme_keyring 00:22:31.184 13:33:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:31.184 13:33:28 -- nvmf/common.sh@123 -- # set -e 00:22:31.184 13:33:28 -- nvmf/common.sh@124 -- # return 0 00:22:31.184 13:33:28 -- nvmf/common.sh@477 -- # '[' -n 1014666 ']' 00:22:31.184 13:33:28 -- nvmf/common.sh@478 -- # killprocess 1014666 00:22:31.184 13:33:28 -- common/autotest_common.sh@926 -- # '[' -z 1014666 ']' 00:22:31.184 13:33:28 -- common/autotest_common.sh@930 -- # kill -0 1014666 00:22:31.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1014666) - No such process 00:22:31.184 13:33:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1014666 is not found' 00:22:31.184 Process with pid 1014666 is not found 00:22:31.184 13:33:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:31.184 13:33:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:31.184 13:33:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:31.184 13:33:28 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.184 13:33:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:31.184 13:33:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.184 13:33:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.184 13:33:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.735 13:33:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:33.735 13:33:30 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:33.735 00:22:33.735 real 1m11.215s 00:22:33.735 user 1m41.110s 00:22:33.735 sys 0m28.898s 00:22:33.735 13:33:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.735 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:22:33.735 ************************************ 00:22:33.735 END TEST nvmf_tls 00:22:33.735 ************************************ 00:22:33.735 13:33:30 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:33.735 13:33:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:33.735 13:33:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:33.735 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:22:33.735 ************************************ 00:22:33.735 START TEST nvmf_fips 00:22:33.735 ************************************ 00:22:33.735 13:33:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:33.735 * Looking for test storage... 
00:22:33.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:33.735 13:33:30 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.735 13:33:30 -- nvmf/common.sh@7 -- # uname -s 00:22:33.735 13:33:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.735 13:33:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.735 13:33:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.735 13:33:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.735 13:33:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.735 13:33:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.735 13:33:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.735 13:33:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.735 13:33:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.736 13:33:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.736 13:33:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.736 13:33:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.736 13:33:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.736 13:33:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.736 13:33:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.736 13:33:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.736 13:33:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.736 13:33:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.736 13:33:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.736 13:33:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.736 13:33:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.736 13:33:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.736 13:33:30 -- paths/export.sh@5 -- # export PATH 00:22:33.736 13:33:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.736 13:33:30 -- nvmf/common.sh@46 -- # : 0 00:22:33.736 13:33:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:33.736 13:33:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:33.736 13:33:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:33.736 13:33:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.736 13:33:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.736 13:33:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:33.736 13:33:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:33.736 13:33:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:33.736 13:33:30 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.736 13:33:30 -- fips/fips.sh@89 -- # check_openssl_version 00:22:33.736 13:33:30 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:33.736 13:33:30 -- fips/fips.sh@85 -- # openssl version 00:22:33.736 13:33:30 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:33.736 13:33:30 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:33.736 13:33:30 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:33.736 13:33:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:33.736 13:33:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:33.736 13:33:30 -- scripts/common.sh@335 -- # IFS=.-: 00:22:33.736 13:33:30 -- scripts/common.sh@335 -- # read -ra ver1 00:22:33.736 13:33:30 -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.736 13:33:30 -- scripts/common.sh@336 -- # read -ra ver2 00:22:33.736 13:33:30 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:33.736 13:33:30 -- scripts/common.sh@339 -- # ver1_l=3 00:22:33.736 13:33:30 -- scripts/common.sh@340 -- # ver2_l=3 00:22:33.736 13:33:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:33.736 13:33:30 -- scripts/common.sh@343 -- # case "$op" in 00:22:33.736 13:33:30 -- scripts/common.sh@347 -- # : 1 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # decimal 3 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=3 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 3 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # decimal 3 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=3 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 3 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:33.736 13:33:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:33.736 13:33:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v++ )) 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # decimal 0 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=0 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 0 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # ver1[v]=0 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # decimal 0 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=0 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 0 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:33.736 13:33:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:33.736 13:33:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v++ )) 00:22:33.736 13:33:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # decimal 9 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=9 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 9 00:22:33.736 13:33:30 -- scripts/common.sh@364 -- # ver1[v]=9 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # decimal 0 00:22:33.736 13:33:30 -- scripts/common.sh@352 -- # local d=0 00:22:33.736 13:33:30 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.736 13:33:30 -- scripts/common.sh@354 -- # echo 0 00:22:33.736 13:33:30 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:33.736 13:33:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:33.736 13:33:30 -- scripts/common.sh@366 -- # return 0 00:22:33.736 13:33:30 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:33.736 13:33:30 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:33.736 13:33:30 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:33.736 13:33:30 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:33.736 13:33:30 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:33.736 13:33:30 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:33.736 13:33:30 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:33.736 13:33:30 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:33.736 13:33:30 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:33.736 13:33:30 -- fips/fips.sh@114 -- # build_openssl_config 00:22:33.736 13:33:30 -- fips/fips.sh@37 -- # cat 00:22:33.736 13:33:30 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:33.736 13:33:30 -- fips/fips.sh@58 -- # cat - 00:22:33.736 13:33:30 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:33.736 13:33:30 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:33.736 13:33:30 -- fips/fips.sh@117 -- # mapfile -t providers 00:22:33.736 13:33:30 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:22:33.736 13:33:30 -- fips/fips.sh@117 -- # openssl list -providers 00:22:33.736 13:33:30 -- fips/fips.sh@117 -- # grep name 00:22:33.736 13:33:30 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:33.736 13:33:30 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:33.736 13:33:30 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:33.736 13:33:30 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:33.736 13:33:30 -- common/autotest_common.sh@640 -- # local es=0 00:22:33.736 13:33:30 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:33.736 13:33:30 -- fips/fips.sh@128 -- # : 00:22:33.736 13:33:30 -- common/autotest_common.sh@628 -- # local arg=openssl 00:22:33.736 13:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.736 13:33:30 -- common/autotest_common.sh@632 -- # type -t openssl 00:22:33.736 13:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.736 13:33:30 -- common/autotest_common.sh@634 -- # type -P openssl 00:22:33.736 13:33:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:33.736 13:33:30 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:22:33.736 13:33:30 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:22:33.736 13:33:30 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:22:33.736 Error setting digest 00:22:33.736 005261B8B87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:33.736 005261B8B87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:33.736 13:33:31 -- common/autotest_common.sh@643 -- # es=1 00:22:33.736 13:33:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:33.736 13:33:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:33.736 13:33:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
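The block above is the FIPS gate for this test: fips.sh requires OpenSSL >= 3.0.0, locates the fips.so provider module, generates spdk_fips.conf and points OPENSSL_CONF at it so the FIPS provider becomes the default, then proves enforcement by checking that a non-approved digest (MD5) is rejected. The "Error setting digest ... unsupported" lines are therefore the expected outcome, not a failure. A simplified sketch of the same sanity check (assuming OPENSSL_CONF has already been set up the way the script does it, and using a plain major-version check instead of the script's full version compare):

# Simplified version of the fips.sh gate; assumes the FIPS provider has already
# been made the default via OPENSSL_CONF, as the script does with spdk_fips.conf.
ver=$(openssl version | awk '{print $2}')
case "$ver" in
    3.*|[4-9]*) echo "OpenSSL $ver is new enough" ;;
    *)          echo "need OpenSSL >= 3.0.0, got $ver" >&2; exit 1 ;;
esac

# Both a base provider and a fips provider should be listed.
openssl list -providers | grep -i name

# MD5 must fail once FIPS is enforced; a successful run means FIPS is NOT active.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still works - FIPS mode is not enforced" >&2
    exit 1
fi
echo "MD5 rejected as expected under FIPS"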
00:22:33.736 13:33:31 -- fips/fips.sh@131 -- # nvmftestinit 00:22:33.737 13:33:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:33.737 13:33:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.737 13:33:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:33.737 13:33:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:33.737 13:33:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:33.737 13:33:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.737 13:33:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.737 13:33:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.737 13:33:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:33.737 13:33:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:33.737 13:33:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:33.737 13:33:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.334 13:33:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:40.334 13:33:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:40.334 13:33:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:40.334 13:33:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:40.334 13:33:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:40.334 13:33:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:40.334 13:33:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:40.334 13:33:37 -- nvmf/common.sh@294 -- # net_devs=() 00:22:40.334 13:33:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:40.334 13:33:37 -- nvmf/common.sh@295 -- # e810=() 00:22:40.334 13:33:37 -- nvmf/common.sh@295 -- # local -ga e810 00:22:40.334 13:33:37 -- nvmf/common.sh@296 -- # x722=() 00:22:40.334 13:33:37 -- nvmf/common.sh@296 -- # local -ga x722 00:22:40.334 13:33:37 -- nvmf/common.sh@297 -- # mlx=() 00:22:40.334 13:33:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:40.334 13:33:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.334 13:33:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:40.334 13:33:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:40.334 13:33:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:40.334 13:33:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.334 13:33:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.334 Found 0000:4b:00.0 
(0x8086 - 0x159b) 00:22:40.334 13:33:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.334 13:33:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.334 13:33:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:40.334 13:33:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:40.334 13:33:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:40.334 13:33:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.334 13:33:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.334 13:33:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.334 13:33:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.334 13:33:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.334 13:33:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:40.334 13:33:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.334 13:33:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.334 13:33:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.334 13:33:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.335 13:33:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.335 13:33:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:40.335 13:33:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:40.335 13:33:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:40.335 13:33:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:40.335 13:33:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:40.335 13:33:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.335 13:33:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.335 13:33:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.335 13:33:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:40.335 13:33:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.335 13:33:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.335 13:33:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:40.335 13:33:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.335 13:33:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.335 13:33:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:40.335 13:33:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:40.335 13:33:37 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:22:40.335 13:33:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.335 13:33:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.335 13:33:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.335 13:33:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:40.335 13:33:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.335 13:33:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.335 13:33:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.335 13:33:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:40.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:22:40.335 00:22:40.335 --- 10.0.0.2 ping statistics --- 00:22:40.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.335 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:22:40.335 13:33:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.519 ms 00:22:40.335 00:22:40.335 --- 10.0.0.1 ping statistics --- 00:22:40.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.335 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:22:40.335 13:33:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.335 13:33:37 -- nvmf/common.sh@410 -- # return 0 00:22:40.335 13:33:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:40.335 13:33:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.335 13:33:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:40.335 13:33:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:40.335 13:33:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.335 13:33:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:40.335 13:33:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:40.335 13:33:37 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:40.335 13:33:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:40.335 13:33:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:40.335 13:33:37 -- common/autotest_common.sh@10 -- # set +x 00:22:40.335 13:33:37 -- nvmf/common.sh@469 -- # nvmfpid=1021095 00:22:40.335 13:33:37 -- nvmf/common.sh@470 -- # waitforlisten 1021095 00:22:40.335 13:33:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.335 13:33:37 -- common/autotest_common.sh@819 -- # '[' -z 1021095 ']' 00:22:40.335 13:33:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.335 13:33:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:40.335 13:33:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.335 13:33:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:40.335 13:33:37 -- common/autotest_common.sh@10 -- # set +x 00:22:40.335 [2024-07-26 13:33:37.787661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:40.335 [2024-07-26 13:33:37.787716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.597 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.597 [2024-07-26 13:33:37.869986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.598 [2024-07-26 13:33:37.912002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:40.598 [2024-07-26 13:33:37.912150] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.598 [2024-07-26 13:33:37.912159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.598 [2024-07-26 13:33:37.912167] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.598 [2024-07-26 13:33:37.912193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.173 13:33:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:41.173 13:33:38 -- common/autotest_common.sh@852 -- # return 0 00:22:41.173 13:33:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:41.173 13:33:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:41.173 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:22:41.173 13:33:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.173 13:33:38 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:41.173 13:33:38 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.173 13:33:38 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.173 13:33:38 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.173 13:33:38 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.173 13:33:38 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.173 13:33:38 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.173 13:33:38 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.435 [2024-07-26 13:33:38.715178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.435 [2024-07-26 13:33:38.731181] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.435 [2024-07-26 13:33:38.731471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.435 malloc0 00:22:41.435 13:33:38 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.435 13:33:38 -- fips/fips.sh@148 -- # bdevperf_pid=1021448 00:22:41.435 13:33:38 -- fips/fips.sh@149 -- # waitforlisten 1021448 /var/tmp/bdevperf.sock 00:22:41.435 13:33:38 -- common/autotest_common.sh@819 -- # '[' -z 1021448 ']' 00:22:41.435 13:33:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.435 13:33:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.435 13:33:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:41.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.435 13:33:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.435 13:33:38 -- common/autotest_common.sh@10 -- # set +x 00:22:41.435 13:33:38 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.435 [2024-07-26 13:33:38.859851] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:41.435 [2024-07-26 13:33:38.859922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021448 ] 00:22:41.435 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.696 [2024-07-26 13:33:38.914807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.696 [2024-07-26 13:33:38.949781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.269 13:33:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.269 13:33:39 -- common/autotest_common.sh@852 -- # return 0 00:22:42.269 13:33:39 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:42.530 [2024-07-26 13:33:39.749354] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.530 TLSTESTn1 00:22:42.530 13:33:39 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.530 Running I/O for 10 seconds... 
00:22:52.548 00:22:52.548 Latency(us) 00:22:52.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.548 Verification LBA range: start 0x0 length 0x2000 00:22:52.548 TLSTESTn1 : 10.06 1502.76 5.87 0.00 0.00 84984.70 6990.51 91750.40 00:22:52.548 =================================================================================================================== 00:22:52.548 Total : 1502.76 5.87 0.00 0.00 84984.70 6990.51 91750.40 00:22:52.548 0 00:22:52.809 13:33:50 -- fips/fips.sh@1 -- # cleanup 00:22:52.809 13:33:50 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:52.809 13:33:50 -- common/autotest_common.sh@796 -- # type=--id 00:22:52.809 13:33:50 -- common/autotest_common.sh@797 -- # id=0 00:22:52.809 13:33:50 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:52.809 13:33:50 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:52.809 13:33:50 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:52.809 13:33:50 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:52.809 13:33:50 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:52.809 13:33:50 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:52.809 nvmf_trace.0 00:22:52.809 13:33:50 -- common/autotest_common.sh@811 -- # return 0 00:22:52.809 13:33:50 -- fips/fips.sh@16 -- # killprocess 1021448 00:22:52.809 13:33:50 -- common/autotest_common.sh@926 -- # '[' -z 1021448 ']' 00:22:52.809 13:33:50 -- common/autotest_common.sh@930 -- # kill -0 1021448 00:22:52.809 13:33:50 -- common/autotest_common.sh@931 -- # uname 00:22:52.809 13:33:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:52.809 13:33:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1021448 00:22:52.809 13:33:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:52.809 13:33:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:52.809 13:33:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1021448' 00:22:52.809 killing process with pid 1021448 00:22:52.809 13:33:50 -- common/autotest_common.sh@945 -- # kill 1021448 00:22:52.809 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.809 00:22:52.809 Latency(us) 00:22:52.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.809 =================================================================================================================== 00:22:52.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.809 13:33:50 -- common/autotest_common.sh@950 -- # wait 1021448 00:22:52.809 13:33:50 -- fips/fips.sh@17 -- # nvmftestfini 00:22:52.809 13:33:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:52.809 13:33:50 -- nvmf/common.sh@116 -- # sync 00:22:52.809 13:33:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:52.809 13:33:50 -- nvmf/common.sh@119 -- # set +e 00:22:52.809 13:33:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:52.809 13:33:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:52.809 rmmod nvme_tcp 00:22:53.071 rmmod nvme_fabrics 00:22:53.071 rmmod nvme_keyring 00:22:53.071 13:33:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.071 13:33:50 -- nvmf/common.sh@123 -- # set -e 00:22:53.071 13:33:50 -- nvmf/common.sh@124 -- # return 0 
00:22:53.071 13:33:50 -- nvmf/common.sh@477 -- # '[' -n 1021095 ']' 00:22:53.071 13:33:50 -- nvmf/common.sh@478 -- # killprocess 1021095 00:22:53.071 13:33:50 -- common/autotest_common.sh@926 -- # '[' -z 1021095 ']' 00:22:53.071 13:33:50 -- common/autotest_common.sh@930 -- # kill -0 1021095 00:22:53.071 13:33:50 -- common/autotest_common.sh@931 -- # uname 00:22:53.071 13:33:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:53.071 13:33:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1021095 00:22:53.071 13:33:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:53.071 13:33:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:53.071 13:33:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1021095' 00:22:53.071 killing process with pid 1021095 00:22:53.071 13:33:50 -- common/autotest_common.sh@945 -- # kill 1021095 00:22:53.071 13:33:50 -- common/autotest_common.sh@950 -- # wait 1021095 00:22:53.071 13:33:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.071 13:33:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.071 13:33:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.071 13:33:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.071 13:33:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.071 13:33:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.071 13:33:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.071 13:33:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.693 13:33:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:55.693 13:33:52 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:55.693 00:22:55.693 real 0m21.885s 00:22:55.693 user 0m21.471s 00:22:55.693 sys 0m10.838s 00:22:55.693 13:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.693 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.693 ************************************ 00:22:55.693 END TEST nvmf_fips 00:22:55.693 ************************************ 00:22:55.693 13:33:52 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:55.693 13:33:52 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:55.693 13:33:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:55.693 13:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.693 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.693 ************************************ 00:22:55.693 START TEST nvmf_fuzz 00:22:55.693 ************************************ 00:22:55.693 13:33:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:55.693 * Looking for test storage... 
00:22:55.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.693 13:33:52 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.693 13:33:52 -- nvmf/common.sh@7 -- # uname -s 00:22:55.693 13:33:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.693 13:33:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.693 13:33:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.693 13:33:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.693 13:33:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.693 13:33:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.693 13:33:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.693 13:33:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.693 13:33:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.693 13:33:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.693 13:33:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.693 13:33:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.693 13:33:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.693 13:33:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.693 13:33:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.693 13:33:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.693 13:33:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.693 13:33:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.693 13:33:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.693 13:33:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.693 13:33:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.694 13:33:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.694 13:33:52 -- paths/export.sh@5 -- # export PATH 00:22:55.694 13:33:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.694 13:33:52 -- nvmf/common.sh@46 -- # : 0 00:22:55.694 13:33:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:55.694 13:33:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:55.694 13:33:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:55.694 13:33:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.694 13:33:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.694 13:33:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:55.694 13:33:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:55.694 13:33:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:55.694 13:33:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:55.694 13:33:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:55.694 13:33:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.694 13:33:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:55.694 13:33:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:55.694 13:33:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:55.694 13:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.694 13:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.694 13:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.694 13:33:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:55.694 13:33:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:55.694 13:33:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:55.694 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:23:02.310 13:33:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:02.310 13:33:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:02.310 13:33:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:02.310 13:33:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:02.310 13:33:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:02.310 13:33:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:02.310 13:33:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:02.310 13:33:59 -- nvmf/common.sh@294 -- # net_devs=() 00:23:02.310 13:33:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:02.310 13:33:59 -- nvmf/common.sh@295 -- # e810=() 00:23:02.310 13:33:59 -- nvmf/common.sh@295 -- # local -ga e810 00:23:02.310 13:33:59 -- nvmf/common.sh@296 -- # x722=() 
00:23:02.310 13:33:59 -- nvmf/common.sh@296 -- # local -ga x722 00:23:02.310 13:33:59 -- nvmf/common.sh@297 -- # mlx=() 00:23:02.310 13:33:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:02.310 13:33:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.310 13:33:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:02.310 13:33:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:02.310 13:33:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:02.310 13:33:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.310 13:33:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:02.310 13:33:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.310 13:33:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:02.310 13:33:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.310 13:33:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.310 13:33:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.310 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.310 13:33:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:02.310 13:33:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:02.310 13:33:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.310 13:33:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.310 13:33:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.310 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.310 13:33:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.310 13:33:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:02.310 13:33:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:02.310 13:33:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:02.310 13:33:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.310 13:33:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.310 13:33:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.310 13:33:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:02.310 13:33:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.310 13:33:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.310 13:33:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:02.311 13:33:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.311 13:33:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.311 13:33:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:02.311 13:33:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:02.311 13:33:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.311 13:33:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.311 13:33:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.311 13:33:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.311 13:33:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:02.311 13:33:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.311 13:33:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.311 13:33:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.572 13:33:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:02.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:23:02.573 00:23:02.573 --- 10.0.0.2 ping statistics --- 00:23:02.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.573 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:23:02.573 13:33:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:23:02.573 00:23:02.573 --- 10.0.0.1 ping statistics --- 00:23:02.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.573 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:23:02.573 13:33:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.573 13:33:59 -- nvmf/common.sh@410 -- # return 0 00:23:02.573 13:33:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:02.573 13:33:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.573 13:33:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:02.573 13:33:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:02.573 13:33:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.573 13:33:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:02.573 13:33:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:02.573 13:33:59 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1027809 00:23:02.573 13:33:59 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:02.573 13:33:59 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:02.573 13:33:59 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1027809 00:23:02.573 13:33:59 -- common/autotest_common.sh@819 -- # '[' -z 1027809 ']' 00:23:02.573 13:33:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.573 13:33:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.573 13:33:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
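The nvmf_tcp_init sequence traced just above builds the loopback rig this job runs on: one E810 port (cvl_0_0) is pushed into a private network namespace and addressed 10.0.0.2 for the target, its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked. Condensed into plain commands (interface and namespace names copied from the trace; this is an illustration, not the nvmf/common.sh helper itself):

# Condensed re-creation of the rig set up by nvmf_tcp_init above (illustrative only).
NS=cvl_0_0_ns_spdk      # target-side namespace, name as seen in the trace
TGT_IF=cvl_0_0          # port handed to the SPDK target
INI_IF=cvl_0_1          # port left in the root namespace for the initiator

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic reach the initiator-side port, then sanity-check both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1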
00:23:02.573 13:33:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.573 13:33:59 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 13:34:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:03.516 13:34:00 -- common/autotest_common.sh@852 -- # return 0 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.516 13:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.516 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 13:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:03.516 13:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.516 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 Malloc0 00:23:03.516 13:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.516 13:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.516 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 13:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.516 13:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.516 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 13:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.516 13:34:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.516 13:34:00 -- common/autotest_common.sh@10 -- # set +x 00:23:03.516 13:34:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:03.516 13:34:00 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:35.630 Fuzzing completed. Shutting down the fuzz application 00:23:35.630 00:23:35.630 Dumping successful admin opcodes: 00:23:35.630 8, 9, 10, 24, 00:23:35.630 Dumping successful io opcodes: 00:23:35.630 0, 9, 00:23:35.630 NS: 0x200003aeff00 I/O qp, Total commands completed: 933349, total successful commands: 5444, random_seed: 1559116160 00:23:35.631 NS: 0x200003aeff00 admin qp, Total commands completed: 117591, total successful commands: 963, random_seed: 3649433728 00:23:35.631 13:34:31 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:35.631 Fuzzing completed. 
Shutting down the fuzz application 00:23:35.631 00:23:35.631 Dumping successful admin opcodes: 00:23:35.631 24, 00:23:35.631 Dumping successful io opcodes: 00:23:35.631 00:23:35.631 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1344275783 00:23:35.631 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1344355087 00:23:35.631 13:34:32 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.631 13:34:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.631 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:23:35.631 13:34:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.631 13:34:32 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:35.631 13:34:32 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:35.631 13:34:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:35.631 13:34:32 -- nvmf/common.sh@116 -- # sync 00:23:35.631 13:34:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:35.631 13:34:32 -- nvmf/common.sh@119 -- # set +e 00:23:35.631 13:34:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:35.631 13:34:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:35.631 rmmod nvme_tcp 00:23:35.631 rmmod nvme_fabrics 00:23:35.631 rmmod nvme_keyring 00:23:35.631 13:34:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:35.631 13:34:32 -- nvmf/common.sh@123 -- # set -e 00:23:35.631 13:34:32 -- nvmf/common.sh@124 -- # return 0 00:23:35.631 13:34:32 -- nvmf/common.sh@477 -- # '[' -n 1027809 ']' 00:23:35.631 13:34:32 -- nvmf/common.sh@478 -- # killprocess 1027809 00:23:35.631 13:34:32 -- common/autotest_common.sh@926 -- # '[' -z 1027809 ']' 00:23:35.631 13:34:32 -- common/autotest_common.sh@930 -- # kill -0 1027809 00:23:35.631 13:34:32 -- common/autotest_common.sh@931 -- # uname 00:23:35.631 13:34:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:35.631 13:34:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1027809 00:23:35.631 13:34:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:35.631 13:34:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:35.631 13:34:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1027809' 00:23:35.631 killing process with pid 1027809 00:23:35.631 13:34:32 -- common/autotest_common.sh@945 -- # kill 1027809 00:23:35.631 13:34:32 -- common/autotest_common.sh@950 -- # wait 1027809 00:23:35.631 13:34:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:35.631 13:34:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:35.631 13:34:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:35.631 13:34:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.631 13:34:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:35.631 13:34:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.631 13:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.631 13:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.546 13:34:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:37.546 13:34:34 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:37.546 00:23:37.546 real 0m42.213s 00:23:37.546 user 0m55.172s 00:23:37.546 sys 
0m16.165s 00:23:37.546 13:34:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:37.546 13:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:37.546 ************************************ 00:23:37.546 END TEST nvmf_fuzz 00:23:37.546 ************************************ 00:23:37.546 13:34:34 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:37.546 13:34:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:37.546 13:34:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:37.546 13:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:37.546 ************************************ 00:23:37.546 START TEST nvmf_multiconnection 00:23:37.546 ************************************ 00:23:37.546 13:34:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:37.546 * Looking for test storage... 00:23:37.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:37.546 13:34:34 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.546 13:34:34 -- nvmf/common.sh@7 -- # uname -s 00:23:37.546 13:34:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.546 13:34:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.546 13:34:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.546 13:34:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.546 13:34:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.546 13:34:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.546 13:34:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.546 13:34:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.546 13:34:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.546 13:34:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.546 13:34:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.546 13:34:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.546 13:34:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.546 13:34:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.546 13:34:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.546 13:34:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.546 13:34:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.546 13:34:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.546 13:34:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.546 13:34:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.546 13:34:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.546 13:34:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.546 13:34:35 -- paths/export.sh@5 -- # export PATH 00:23:37.546 13:34:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.546 13:34:35 -- nvmf/common.sh@46 -- # : 0 00:23:37.807 13:34:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:37.807 13:34:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:37.807 13:34:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:37.807 13:34:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.807 13:34:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.807 13:34:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:37.807 13:34:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:37.807 13:34:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:37.807 13:34:35 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.807 13:34:35 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.807 13:34:35 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:37.807 13:34:35 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:37.807 13:34:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:37.807 13:34:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.807 13:34:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:37.807 13:34:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:37.807 13:34:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:37.807 13:34:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.807 13:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.807 13:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.807 13:34:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:37.807 13:34:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:37.807 13:34:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:37.807 13:34:35 -- common/autotest_common.sh@10 -- 
# set +x 00:23:44.402 13:34:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:44.402 13:34:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:44.402 13:34:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:44.403 13:34:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:44.403 13:34:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:44.403 13:34:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:44.403 13:34:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:44.403 13:34:41 -- nvmf/common.sh@294 -- # net_devs=() 00:23:44.403 13:34:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:44.403 13:34:41 -- nvmf/common.sh@295 -- # e810=() 00:23:44.403 13:34:41 -- nvmf/common.sh@295 -- # local -ga e810 00:23:44.403 13:34:41 -- nvmf/common.sh@296 -- # x722=() 00:23:44.403 13:34:41 -- nvmf/common.sh@296 -- # local -ga x722 00:23:44.403 13:34:41 -- nvmf/common.sh@297 -- # mlx=() 00:23:44.403 13:34:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:44.403 13:34:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.403 13:34:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:44.403 13:34:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:44.403 13:34:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.403 13:34:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.403 13:34:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.403 13:34:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.403 13:34:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.403 13:34:41 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.403 13:34:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.403 13:34:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.403 13:34:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.403 13:34:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.403 13:34:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.403 13:34:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.403 13:34:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.403 13:34:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.403 13:34:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.403 13:34:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:44.403 13:34:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:44.403 13:34:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:44.403 13:34:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.403 13:34:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.403 13:34:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.403 13:34:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:44.403 13:34:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.403 13:34:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.403 13:34:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:44.403 13:34:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.403 13:34:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.403 13:34:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:44.403 13:34:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:44.403 13:34:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.403 13:34:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.665 13:34:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.665 13:34:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.665 13:34:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:44.665 13:34:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.665 13:34:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.665 13:34:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.665 13:34:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:44.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:44.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:23:44.665 00:23:44.665 --- 10.0.0.2 ping statistics --- 00:23:44.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.665 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:23:44.665 13:34:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:23:44.665 00:23:44.665 --- 10.0.0.1 ping statistics --- 00:23:44.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.665 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:23:44.665 13:34:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.665 13:34:42 -- nvmf/common.sh@410 -- # return 0 00:23:44.665 13:34:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:44.665 13:34:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.665 13:34:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:44.665 13:34:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:44.665 13:34:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.665 13:34:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:44.665 13:34:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:44.665 13:34:42 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:44.665 13:34:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:44.665 13:34:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:44.665 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:44.665 13:34:42 -- nvmf/common.sh@469 -- # nvmfpid=1038247 00:23:44.665 13:34:42 -- nvmf/common.sh@470 -- # waitforlisten 1038247 00:23:44.665 13:34:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.665 13:34:42 -- common/autotest_common.sh@819 -- # '[' -z 1038247 ']' 00:23:44.665 13:34:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.665 13:34:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:44.665 13:34:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.665 13:34:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:44.665 13:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:44.930 [2024-07-26 13:34:42.178825] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:44.930 [2024-07-26 13:34:42.178892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.930 [2024-07-26 13:34:42.268448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.930 [2024-07-26 13:34:42.306361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.930 [2024-07-26 13:34:42.306496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
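Both test targets in this section are started the same way: nvmf_tgt is launched inside the namespace and the harness blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock is usable (the "Waiting for process to start up..." line above is that helper's progress message). The loop below is only a rough approximation of that wait; the real helper in autotest_common.sh does more (it retries an actual RPC call), so treat every detail here as an assumption.

# Rough approximation of waitforlisten (illustrative; the real helper probes the RPC server).
wait_for_tgt() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} tries=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    while (( tries++ < 100 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died while we were waiting
        [ -S "$rpc_sock" ] && return 0           # socket file exists; treat the target as ready
        sleep 0.1
    done
    return 1                                      # gave up after ~10 s
}

Used as, for example, wait_for_tgt "$nvmfpid" || exit 1 before the first rpc_cmd is issued.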
00:23:44.930 [2024-07-26 13:34:42.306508] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.930 [2024-07-26 13:34:42.306518] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.930 [2024-07-26 13:34:42.306656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.930 [2024-07-26 13:34:42.306762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.930 [2024-07-26 13:34:42.306918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.930 [2024-07-26 13:34:42.306918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.565 13:34:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:45.565 13:34:43 -- common/autotest_common.sh@852 -- # return 0 00:23:45.565 13:34:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:45.565 13:34:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:45.565 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.826 13:34:43 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 [2024-07-26 13:34:43.055639] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.826 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 Malloc1 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 [2024-07-26 13:34:43.123034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.826 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:45.826 13:34:43 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 Malloc2 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.826 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 Malloc3 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.826 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 Malloc4 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:45.826 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.826 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:45.826 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:45.826 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.826 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 Malloc5 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.087 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 Malloc6 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:46.087 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.087 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.087 13:34:43 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.087 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.088 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 Malloc7 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.088 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 Malloc8 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.088 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 Malloc9 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
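The subsystem setup that runs from Malloc1 up to this point, and continues below through cnode11, is the same three-RPC pattern applied per index; collapsed back into loop form it amounts to roughly the following. The loop is a sketch based on the commands in this trace; invoking scripts/rpc.py directly instead of the rpc_cmd wrapper is an assumption.

# Loop form of the per-subsystem setup traced above and below (sketch; rpc.py path assumed).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NVMF_SUBSYS=11

"$RPC" nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 "$NVMF_SUBSYS"); do
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MB malloc bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done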
00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.088 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.088 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.088 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:46.088 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.088 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 Malloc10 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.349 13:34:43 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 Malloc11 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:46.349 13:34:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:46.349 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:46.349 13:34:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:46.349 13:34:43 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:46.349 13:34:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:46.349 13:34:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:48.265 13:34:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:48.265 13:34:45 -- common/autotest_common.sh@1177 -- # local i=0 00:23:48.265 13:34:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:48.265 13:34:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:48.265 13:34:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:50.185 13:34:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:50.185 13:34:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:50.185 13:34:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:23:50.185 13:34:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:50.185 13:34:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.185 13:34:47 -- common/autotest_common.sh@1187 -- # return 0 00:23:50.185 13:34:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.186 13:34:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:51.571 13:34:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:51.571 13:34:48 -- common/autotest_common.sh@1177 -- # local i=0 00:23:51.571 13:34:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:51.571 13:34:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:51.571 13:34:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:53.485 13:34:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:53.485 13:34:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:53.485 13:34:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:53.485 13:34:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:53.485 13:34:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:53.485 13:34:50 -- common/autotest_common.sh@1187 -- # return 0 00:23:53.485 13:34:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.485 13:34:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:55.401 13:34:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:55.401 13:34:52 -- common/autotest_common.sh@1177 -- # local i=0 00:23:55.401 13:34:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.401 13:34:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:55.401 13:34:52 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:23:57.317 13:34:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:57.317 13:34:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:57.317 13:34:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:23:57.317 13:34:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:57.317 13:34:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.317 13:34:54 -- common/autotest_common.sh@1187 -- # return 0 00:23:57.317 13:34:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.317 13:34:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:58.706 13:34:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:58.706 13:34:56 -- common/autotest_common.sh@1177 -- # local i=0 00:23:58.706 13:34:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.706 13:34:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:58.706 13:34:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:01.253 13:34:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:01.253 13:34:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:01.253 13:34:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:01.253 13:34:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:01.253 13:34:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.253 13:34:58 -- common/autotest_common.sh@1187 -- # return 0 00:24:01.253 13:34:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.253 13:34:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:02.641 13:34:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:02.641 13:34:59 -- common/autotest_common.sh@1177 -- # local i=0 00:24:02.641 13:34:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.641 13:34:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:02.641 13:34:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:04.557 13:35:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:04.557 13:35:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:04.557 13:35:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:04.557 13:35:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:04.557 13:35:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.557 13:35:01 -- common/autotest_common.sh@1187 -- # return 0 00:24:04.557 13:35:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.557 13:35:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:06.000 13:35:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:06.000 13:35:03 -- common/autotest_common.sh@1177 -- # local i=0 00:24:06.000 13:35:03 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:24:06.000 13:35:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:06.000 13:35:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:08.546 13:35:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:08.546 13:35:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:08.546 13:35:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:08.546 13:35:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:08.546 13:35:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:08.546 13:35:05 -- common/autotest_common.sh@1187 -- # return 0 00:24:08.546 13:35:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.546 13:35:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:09.932 13:35:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:09.932 13:35:07 -- common/autotest_common.sh@1177 -- # local i=0 00:24:09.932 13:35:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.932 13:35:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:09.932 13:35:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:11.848 13:35:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:11.848 13:35:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:11.848 13:35:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:11.848 13:35:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:11.848 13:35:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.848 13:35:09 -- common/autotest_common.sh@1187 -- # return 0 00:24:11.848 13:35:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.848 13:35:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:13.765 13:35:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:13.765 13:35:10 -- common/autotest_common.sh@1177 -- # local i=0 00:24:13.765 13:35:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.765 13:35:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:13.765 13:35:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:15.681 13:35:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:15.681 13:35:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:15.681 13:35:13 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:15.681 13:35:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:15.681 13:35:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.681 13:35:13 -- common/autotest_common.sh@1187 -- # return 0 00:24:15.681 13:35:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.681 13:35:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:17.609 13:35:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:17.609 
13:35:14 -- common/autotest_common.sh@1177 -- # local i=0 00:24:17.609 13:35:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:17.609 13:35:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:17.609 13:35:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:19.523 13:35:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:19.523 13:35:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:19.523 13:35:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:19.523 13:35:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:19.523 13:35:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.523 13:35:16 -- common/autotest_common.sh@1187 -- # return 0 00:24:19.523 13:35:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.523 13:35:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:21.438 13:35:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:21.438 13:35:18 -- common/autotest_common.sh@1177 -- # local i=0 00:24:21.438 13:35:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.438 13:35:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:21.438 13:35:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:23.354 13:35:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:23.354 13:35:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:23.354 13:35:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:23.354 13:35:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:23.354 13:35:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.354 13:35:20 -- common/autotest_common.sh@1187 -- # return 0 00:24:23.354 13:35:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.354 13:35:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:25.270 13:35:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:25.270 13:35:22 -- common/autotest_common.sh@1177 -- # local i=0 00:24:25.270 13:35:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.270 13:35:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:25.270 13:35:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:27.249 13:35:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:27.249 13:35:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:27.249 13:35:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:27.249 13:35:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:27.249 13:35:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.249 13:35:24 -- common/autotest_common.sh@1187 -- # return 0 00:24:27.249 13:35:24 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:27.249 [global] 00:24:27.249 thread=1 00:24:27.249 invalidate=1 00:24:27.249 rw=read 00:24:27.249 time_based=1 00:24:27.249 
runtime=10 00:24:27.249 ioengine=libaio 00:24:27.249 direct=1 00:24:27.249 bs=262144 00:24:27.249 iodepth=64 00:24:27.249 norandommap=1 00:24:27.249 numjobs=1 00:24:27.249 00:24:27.249 [job0] 00:24:27.249 filename=/dev/nvme0n1 00:24:27.249 [job1] 00:24:27.249 filename=/dev/nvme10n1 00:24:27.249 [job2] 00:24:27.249 filename=/dev/nvme1n1 00:24:27.249 [job3] 00:24:27.249 filename=/dev/nvme2n1 00:24:27.249 [job4] 00:24:27.249 filename=/dev/nvme3n1 00:24:27.249 [job5] 00:24:27.249 filename=/dev/nvme4n1 00:24:27.249 [job6] 00:24:27.249 filename=/dev/nvme5n1 00:24:27.249 [job7] 00:24:27.249 filename=/dev/nvme6n1 00:24:27.249 [job8] 00:24:27.249 filename=/dev/nvme7n1 00:24:27.249 [job9] 00:24:27.249 filename=/dev/nvme8n1 00:24:27.249 [job10] 00:24:27.249 filename=/dev/nvme9n1 00:24:27.510 Could not set queue depth (nvme0n1) 00:24:27.510 Could not set queue depth (nvme10n1) 00:24:27.510 Could not set queue depth (nvme1n1) 00:24:27.510 Could not set queue depth (nvme2n1) 00:24:27.510 Could not set queue depth (nvme3n1) 00:24:27.510 Could not set queue depth (nvme4n1) 00:24:27.510 Could not set queue depth (nvme5n1) 00:24:27.510 Could not set queue depth (nvme6n1) 00:24:27.510 Could not set queue depth (nvme7n1) 00:24:27.510 Could not set queue depth (nvme8n1) 00:24:27.510 Could not set queue depth (nvme9n1) 00:24:27.772 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:27.772 fio-3.35 00:24:27.772 Starting 11 threads 00:24:40.015 00:24:40.015 job0: (groupid=0, jobs=1): err= 0: pid=1047096: Fri Jul 26 13:35:35 2024 00:24:40.015 read: IOPS=974, BW=244MiB/s (255MB/s)(2440MiB/10019msec) 00:24:40.015 slat (usec): min=6, max=116315, avg=987.39, stdev=3131.89 00:24:40.015 clat (msec): min=15, max=235, avg=64.63, stdev=31.80 00:24:40.015 lat (msec): min=15, max=258, avg=65.62, stdev=32.27 00:24:40.015 clat percentiles (msec): 00:24:40.015 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 45], 00:24:40.015 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 60], 00:24:40.015 | 70.00th=[ 66], 80.00th=[ 78], 90.00th=[ 120], 95.00th=[ 144], 00:24:40.015 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 180], 99.95th=[ 182], 00:24:40.015 | 99.99th=[ 236] 00:24:40.015 bw ( KiB/s): min=105984, max=455680, per=11.48%, avg=248205.70, 
stdev=91326.58, samples=20 00:24:40.015 iops : min= 414, max= 1780, avg=969.55, stdev=356.75, samples=20 00:24:40.015 lat (msec) : 20=0.14%, 50=37.51%, 100=50.55%, 250=11.79% 00:24:40.015 cpu : usr=0.39%, sys=3.37%, ctx=2184, majf=0, minf=4097 00:24:40.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:40.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.015 issued rwts: total=9759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.015 job1: (groupid=0, jobs=1): err= 0: pid=1047107: Fri Jul 26 13:35:35 2024 00:24:40.015 read: IOPS=929, BW=232MiB/s (244MB/s)(2332MiB/10029msec) 00:24:40.015 slat (usec): min=8, max=99801, avg=1016.39, stdev=3108.32 00:24:40.015 clat (msec): min=9, max=195, avg=67.70, stdev=26.93 00:24:40.015 lat (msec): min=9, max=195, avg=68.72, stdev=27.24 00:24:40.015 clat percentiles (msec): 00:24:40.015 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 46], 00:24:40.015 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 70], 00:24:40.015 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 129], 00:24:40.015 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 188], 00:24:40.015 | 99.99th=[ 197] 00:24:40.015 bw ( KiB/s): min=128000, max=363008, per=10.96%, avg=237114.00, stdev=70447.17, samples=20 00:24:40.015 iops : min= 500, max= 1418, avg=926.20, stdev=275.20, samples=20 00:24:40.015 lat (msec) : 10=0.04%, 20=0.46%, 50=31.44%, 100=56.40%, 250=11.66% 00:24:40.015 cpu : usr=0.38%, sys=3.20%, ctx=1955, majf=0, minf=3535 00:24:40.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:40.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.015 issued rwts: total=9326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.015 job2: (groupid=0, jobs=1): err= 0: pid=1047110: Fri Jul 26 13:35:35 2024 00:24:40.015 read: IOPS=722, BW=181MiB/s (190MB/s)(1823MiB/10088msec) 00:24:40.015 slat (usec): min=8, max=71110, avg=1307.14, stdev=3761.19 00:24:40.015 clat (msec): min=8, max=231, avg=87.11, stdev=27.88 00:24:40.015 lat (msec): min=8, max=231, avg=88.41, stdev=28.28 00:24:40.015 clat percentiles (msec): 00:24:40.015 | 1.00th=[ 41], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 65], 00:24:40.015 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 87], 00:24:40.015 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 133], 95.00th=[ 146], 00:24:40.015 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 207], 99.95th=[ 228], 00:24:40.015 | 99.99th=[ 232] 00:24:40.015 bw ( KiB/s): min=121344, max=262656, per=8.56%, avg=185088.00, stdev=38788.59, samples=20 00:24:40.015 iops : min= 474, max= 1026, avg=723.00, stdev=151.52, samples=20 00:24:40.015 lat (msec) : 10=0.05%, 20=0.22%, 50=1.84%, 100=74.74%, 250=23.15% 00:24:40.015 cpu : usr=0.27%, sys=2.67%, ctx=1704, majf=0, minf=4097 00:24:40.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:40.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.015 issued rwts: total=7293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.015 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:24:40.015 job3: (groupid=0, jobs=1): err= 0: pid=1047112: Fri Jul 26 13:35:35 2024 00:24:40.015 read: IOPS=593, BW=148MiB/s (156MB/s)(1497MiB/10091msec) 00:24:40.015 slat (usec): min=5, max=116102, avg=1507.30, stdev=5209.70 00:24:40.015 clat (msec): min=4, max=285, avg=106.22, stdev=38.59 00:24:40.015 lat (msec): min=4, max=285, avg=107.73, stdev=39.28 00:24:40.015 clat percentiles (msec): 00:24:40.015 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 74], 00:24:40.015 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 114], 60.00th=[ 120], 00:24:40.015 | 70.00th=[ 126], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 161], 00:24:40.015 | 99.00th=[ 194], 99.50th=[ 209], 99.90th=[ 228], 99.95th=[ 234], 00:24:40.015 | 99.99th=[ 288] 00:24:40.015 bw ( KiB/s): min=103424, max=224256, per=7.01%, avg=151606.40, stdev=38289.90, samples=20 00:24:40.015 iops : min= 404, max= 876, avg=592.20, stdev=149.55, samples=20 00:24:40.015 lat (msec) : 10=0.18%, 20=1.95%, 50=9.89%, 100=19.93%, 250=68.03% 00:24:40.015 lat (msec) : 500=0.02% 00:24:40.015 cpu : usr=0.24%, sys=1.87%, ctx=1544, majf=0, minf=4097 00:24:40.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:40.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.015 issued rwts: total=5986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.015 job4: (groupid=0, jobs=1): err= 0: pid=1047115: Fri Jul 26 13:35:35 2024 00:24:40.015 read: IOPS=661, BW=165MiB/s (173MB/s)(1658MiB/10030msec) 00:24:40.015 slat (usec): min=8, max=86449, avg=1386.86, stdev=4336.34 00:24:40.015 clat (msec): min=8, max=212, avg=95.27, stdev=36.87 00:24:40.015 lat (msec): min=8, max=241, avg=96.66, stdev=37.47 00:24:40.015 clat percentiles (msec): 00:24:40.015 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 56], 20.00th=[ 64], 00:24:40.015 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 109], 00:24:40.015 | 70.00th=[ 118], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 159], 00:24:40.015 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 194], 00:24:40.015 | 99.99th=[ 213] 00:24:40.015 bw ( KiB/s): min=103424, max=275456, per=7.77%, avg=168115.20, stdev=57841.40, samples=20 00:24:40.015 iops : min= 404, max= 1076, avg=656.70, stdev=225.94, samples=20 00:24:40.016 lat (msec) : 10=0.06%, 20=0.84%, 50=5.19%, 100=49.14%, 250=44.77% 00:24:40.016 cpu : usr=0.23%, sys=2.07%, ctx=1635, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job5: (groupid=0, jobs=1): err= 0: pid=1047118: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=702, BW=176MiB/s (184MB/s)(1767MiB/10061msec) 00:24:40.016 slat (usec): min=6, max=70721, avg=1318.09, stdev=3932.50 00:24:40.016 clat (msec): min=7, max=207, avg=89.67, stdev=39.39 00:24:40.016 lat (msec): min=7, max=207, avg=90.99, stdev=39.96 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 00:24:40.016 | 30.00th=[ 58], 40.00th=[ 74], 50.00th=[ 95], 60.00th=[ 108], 00:24:40.016 | 
70.00th=[ 115], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 155], 00:24:40.016 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 180], 99.95th=[ 180], 00:24:40.016 | 99.99th=[ 209] 00:24:40.016 bw ( KiB/s): min=107520, max=401920, per=8.29%, avg=179364.95, stdev=89150.13, samples=20 00:24:40.016 iops : min= 420, max= 1570, avg=700.60, stdev=348.28, samples=20 00:24:40.016 lat (msec) : 10=0.01%, 20=0.18%, 50=26.26%, 100=25.96%, 250=47.59% 00:24:40.016 cpu : usr=0.21%, sys=2.14%, ctx=1690, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=7069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job6: (groupid=0, jobs=1): err= 0: pid=1047119: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=555, BW=139MiB/s (146MB/s)(1401MiB/10090msec) 00:24:40.016 slat (usec): min=6, max=121580, avg=1525.73, stdev=5113.76 00:24:40.016 clat (msec): min=6, max=227, avg=113.52, stdev=34.34 00:24:40.016 lat (msec): min=6, max=255, avg=115.04, stdev=34.92 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 71], 20.00th=[ 92], 00:24:40.016 | 30.00th=[ 107], 40.00th=[ 114], 50.00th=[ 120], 60.00th=[ 125], 00:24:40.016 | 70.00th=[ 130], 80.00th=[ 138], 90.00th=[ 150], 95.00th=[ 163], 00:24:40.016 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 228], 00:24:40.016 | 99.99th=[ 228] 00:24:40.016 bw ( KiB/s): min=101376, max=222208, per=6.56%, avg=141849.60, stdev=27534.73, samples=20 00:24:40.016 iops : min= 396, max= 868, avg=554.10, stdev=107.56, samples=20 00:24:40.016 lat (msec) : 10=0.27%, 20=1.07%, 50=7.12%, 100=15.86%, 250=75.68% 00:24:40.016 cpu : usr=0.24%, sys=1.85%, ctx=1515, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=5605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job7: (groupid=0, jobs=1): err= 0: pid=1047120: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=747, BW=187MiB/s (196MB/s)(1880MiB/10063msec) 00:24:40.016 slat (usec): min=8, max=29623, avg=1315.80, stdev=3230.51 00:24:40.016 clat (msec): min=21, max=165, avg=84.21, stdev=22.92 00:24:40.016 lat (msec): min=21, max=165, avg=85.53, stdev=23.28 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 65], 00:24:40.016 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 86], 00:24:40.016 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 122], 95.00th=[ 130], 00:24:40.016 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 167], 00:24:40.016 | 99.99th=[ 167] 00:24:40.016 bw ( KiB/s): min=123392, max=280576, per=8.83%, avg=190848.00, stdev=44262.60, samples=20 00:24:40.016 iops : min= 482, max= 1096, avg=745.50, stdev=172.90, samples=20 00:24:40.016 lat (msec) : 50=1.90%, 100=76.68%, 250=21.42% 00:24:40.016 cpu : usr=0.28%, sys=2.73%, ctx=1693, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=7518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job8: (groupid=0, jobs=1): err= 0: pid=1047121: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=1333, BW=333MiB/s (350MB/s)(3353MiB/10060msec) 00:24:40.016 slat (usec): min=6, max=27000, avg=733.21, stdev=1887.02 00:24:40.016 clat (msec): min=8, max=121, avg=47.21, stdev=21.76 00:24:40.016 lat (msec): min=8, max=135, avg=47.95, stdev=22.08 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 25], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 32], 00:24:40.016 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 40], 00:24:40.016 | 70.00th=[ 47], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 00:24:40.016 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 113], 99.95th=[ 114], 00:24:40.016 | 99.99th=[ 122] 00:24:40.016 bw ( KiB/s): min=169984, max=535552, per=15.80%, avg=341760.00, stdev=134081.71, samples=20 00:24:40.016 iops : min= 664, max= 2092, avg=1335.00, stdev=523.76, samples=20 00:24:40.016 lat (msec) : 10=0.03%, 20=0.16%, 50=71.29%, 100=27.26%, 250=1.27% 00:24:40.016 cpu : usr=0.50%, sys=3.90%, ctx=2954, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=13413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job9: (groupid=0, jobs=1): err= 0: pid=1047122: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=643, BW=161MiB/s (169MB/s)(1623MiB/10084msec) 00:24:40.016 slat (usec): min=8, max=480022, avg=1407.43, stdev=7393.58 00:24:40.016 clat (msec): min=13, max=642, avg=97.92, stdev=48.69 00:24:40.016 lat (msec): min=13, max=643, avg=99.32, stdev=49.32 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 23], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 70], 00:24:40.016 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 100], 00:24:40.016 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 130], 95.00th=[ 146], 00:24:40.016 | 99.00th=[ 275], 99.50th=[ 489], 99.90th=[ 493], 99.95th=[ 493], 00:24:40.016 | 99.99th=[ 642] 00:24:40.016 bw ( KiB/s): min=31744, max=258560, per=7.61%, avg=164564.70, stdev=52368.50, samples=20 00:24:40.016 iops : min= 124, max= 1010, avg=642.80, stdev=204.56, samples=20 00:24:40.016 lat (msec) : 20=0.60%, 50=2.23%, 100=58.67%, 250=37.32%, 500=1.16% 00:24:40.016 lat (msec) : 750=0.02% 00:24:40.016 cpu : usr=0.25%, sys=2.19%, ctx=1441, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=6492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 job10: (groupid=0, jobs=1): err= 0: pid=1047123: Fri Jul 26 13:35:35 2024 00:24:40.016 read: IOPS=610, BW=153MiB/s (160MB/s)(1538MiB/10080msec) 00:24:40.016 slat (usec): min=7, max=104829, avg=1339.91, stdev=5608.71 00:24:40.016 clat (msec): min=2, max=333, avg=103.45, stdev=52.54 00:24:40.016 lat (msec): 
min=2, max=333, avg=104.79, stdev=53.04 00:24:40.016 clat percentiles (msec): 00:24:40.016 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 50], 20.00th=[ 64], 00:24:40.016 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 108], 00:24:40.016 | 70.00th=[ 121], 80.00th=[ 138], 90.00th=[ 161], 95.00th=[ 192], 00:24:40.016 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 334], 00:24:40.016 | 99.99th=[ 334] 00:24:40.016 bw ( KiB/s): min=69120, max=218624, per=7.21%, avg=155827.20, stdev=44884.58, samples=20 00:24:40.016 iops : min= 270, max= 854, avg=608.70, stdev=175.33, samples=20 00:24:40.016 lat (msec) : 4=0.26%, 10=0.93%, 20=0.88%, 50=8.07%, 100=43.67% 00:24:40.016 lat (msec) : 250=43.67%, 500=2.52% 00:24:40.016 cpu : usr=0.18%, sys=1.91%, ctx=1303, majf=0, minf=4097 00:24:40.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:40.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:40.016 issued rwts: total=6150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.016 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:40.016 00:24:40.016 Run status group 0 (all jobs): 00:24:40.016 READ: bw=2112MiB/s (2214MB/s), 139MiB/s-333MiB/s (146MB/s-350MB/s), io=20.8GiB (22.3GB), run=10019-10091msec 00:24:40.016 00:24:40.016 Disk stats (read/write): 00:24:40.016 nvme0n1: ios=19100/0, merge=0/0, ticks=1216947/0, in_queue=1216947, util=96.40% 00:24:40.016 nvme10n1: ios=18297/0, merge=0/0, ticks=1216485/0, in_queue=1216485, util=96.68% 00:24:40.016 nvme1n1: ios=14331/0, merge=0/0, ticks=1212788/0, in_queue=1212788, util=97.01% 00:24:40.016 nvme2n1: ios=11693/0, merge=0/0, ticks=1210420/0, in_queue=1210420, util=97.23% 00:24:40.016 nvme3n1: ios=12858/0, merge=0/0, ticks=1211207/0, in_queue=1211207, util=97.34% 00:24:40.016 nvme4n1: ios=13678/0, merge=0/0, ticks=1218441/0, in_queue=1218441, util=97.76% 00:24:40.016 nvme5n1: ios=10792/0, merge=0/0, ticks=1213964/0, in_queue=1213964, util=98.04% 00:24:40.016 nvme6n1: ios=14692/0, merge=0/0, ticks=1209588/0, in_queue=1209588, util=98.22% 00:24:40.017 nvme7n1: ios=26304/0, merge=0/0, ticks=1217707/0, in_queue=1217707, util=98.76% 00:24:40.017 nvme8n1: ios=12674/0, merge=0/0, ticks=1213627/0, in_queue=1213627, util=98.92% 00:24:40.017 nvme9n1: ios=11940/0, merge=0/0, ticks=1221720/0, in_queue=1221720, util=99.09% 00:24:40.017 13:35:35 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:40.017 [global] 00:24:40.017 thread=1 00:24:40.017 invalidate=1 00:24:40.017 rw=randwrite 00:24:40.017 time_based=1 00:24:40.017 runtime=10 00:24:40.017 ioengine=libaio 00:24:40.017 direct=1 00:24:40.017 bs=262144 00:24:40.017 iodepth=64 00:24:40.017 norandommap=1 00:24:40.017 numjobs=1 00:24:40.017 00:24:40.017 [job0] 00:24:40.017 filename=/dev/nvme0n1 00:24:40.017 [job1] 00:24:40.017 filename=/dev/nvme10n1 00:24:40.017 [job2] 00:24:40.017 filename=/dev/nvme1n1 00:24:40.017 [job3] 00:24:40.017 filename=/dev/nvme2n1 00:24:40.017 [job4] 00:24:40.017 filename=/dev/nvme3n1 00:24:40.017 [job5] 00:24:40.017 filename=/dev/nvme4n1 00:24:40.017 [job6] 00:24:40.017 filename=/dev/nvme5n1 00:24:40.017 [job7] 00:24:40.017 filename=/dev/nvme6n1 00:24:40.017 [job8] 00:24:40.017 filename=/dev/nvme7n1 00:24:40.017 [job9] 00:24:40.017 filename=/dev/nvme8n1 00:24:40.017 [job10] 00:24:40.017 filename=/dev/nvme9n1 00:24:40.017 Could not 
set queue depth (nvme0n1) 00:24:40.017 Could not set queue depth (nvme10n1) 00:24:40.017 Could not set queue depth (nvme1n1) 00:24:40.017 Could not set queue depth (nvme2n1) 00:24:40.017 Could not set queue depth (nvme3n1) 00:24:40.017 Could not set queue depth (nvme4n1) 00:24:40.017 Could not set queue depth (nvme5n1) 00:24:40.017 Could not set queue depth (nvme6n1) 00:24:40.017 Could not set queue depth (nvme7n1) 00:24:40.017 Could not set queue depth (nvme8n1) 00:24:40.017 Could not set queue depth (nvme9n1) 00:24:40.017 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:40.017 fio-3.35 00:24:40.017 Starting 11 threads 00:24:50.030 00:24:50.030 job0: (groupid=0, jobs=1): err= 0: pid=1049228: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=579, BW=145MiB/s (152MB/s)(1467MiB/10128msec); 0 zone resets 00:24:50.030 slat (usec): min=23, max=133389, avg=1592.85, stdev=4187.18 00:24:50.030 clat (msec): min=8, max=316, avg=108.79, stdev=36.92 00:24:50.030 lat (msec): min=8, max=316, avg=110.39, stdev=37.03 00:24:50.030 clat percentiles (msec): 00:24:50.030 | 1.00th=[ 46], 5.00th=[ 74], 10.00th=[ 80], 20.00th=[ 87], 00:24:50.030 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 106], 00:24:50.030 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 144], 95.00th=[ 190], 00:24:50.030 | 99.00th=[ 262], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 313], 00:24:50.030 | 99.99th=[ 317] 00:24:50.030 bw ( KiB/s): min=88576, max=201216, per=9.42%, avg=148621.50, stdev=27958.73, samples=20 00:24:50.030 iops : min= 346, max= 786, avg=580.55, stdev=109.22, samples=20 00:24:50.030 lat (msec) : 10=0.03%, 20=0.26%, 50=0.80%, 100=49.37%, 250=48.24% 00:24:50.030 lat (msec) : 500=1.30% 00:24:50.030 cpu : usr=1.32%, sys=1.97%, ctx=1750, majf=0, minf=1 00:24:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.030 issued rwts: total=0,5868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.030 job1: (groupid=0, jobs=1): err= 
0: pid=1049241: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=571, BW=143MiB/s (150MB/s)(1443MiB/10101msec); 0 zone resets 00:24:50.030 slat (usec): min=27, max=36115, avg=1671.48, stdev=3170.66 00:24:50.030 clat (msec): min=33, max=215, avg=110.27, stdev=21.86 00:24:50.030 lat (msec): min=33, max=215, avg=111.94, stdev=21.98 00:24:50.030 clat percentiles (msec): 00:24:50.030 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 88], 20.00th=[ 93], 00:24:50.030 | 30.00th=[ 97], 40.00th=[ 102], 50.00th=[ 107], 60.00th=[ 113], 00:24:50.030 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 138], 95.00th=[ 159], 00:24:50.030 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 199], 99.95th=[ 207], 00:24:50.030 | 99.99th=[ 215] 00:24:50.030 bw ( KiB/s): min=98304, max=177664, per=9.27%, avg=146193.75, stdev=21815.92, samples=20 00:24:50.030 iops : min= 384, max= 694, avg=571.05, stdev=85.19, samples=20 00:24:50.030 lat (msec) : 50=0.21%, 100=37.14%, 250=62.65% 00:24:50.030 cpu : usr=1.37%, sys=1.57%, ctx=1644, majf=0, minf=1 00:24:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.030 issued rwts: total=0,5773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.030 job2: (groupid=0, jobs=1): err= 0: pid=1049242: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=529, BW=132MiB/s (139MB/s)(1338MiB/10109msec); 0 zone resets 00:24:50.030 slat (usec): min=31, max=52539, avg=1812.85, stdev=3735.75 00:24:50.030 clat (msec): min=9, max=295, avg=119.04, stdev=30.62 00:24:50.030 lat (msec): min=9, max=295, avg=120.86, stdev=30.83 00:24:50.030 clat percentiles (msec): 00:24:50.030 | 1.00th=[ 56], 5.00th=[ 85], 10.00th=[ 91], 20.00th=[ 99], 00:24:50.030 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 117], 00:24:50.030 | 70.00th=[ 128], 80.00th=[ 138], 90.00th=[ 157], 95.00th=[ 167], 00:24:50.030 | 99.00th=[ 230], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 284], 00:24:50.030 | 99.99th=[ 296] 00:24:50.030 bw ( KiB/s): min=82432, max=174080, per=8.58%, avg=135330.45, stdev=25989.12, samples=20 00:24:50.030 iops : min= 322, max= 680, avg=528.60, stdev=101.48, samples=20 00:24:50.030 lat (msec) : 10=0.02%, 20=0.17%, 50=0.49%, 100=23.07%, 250=75.78% 00:24:50.030 lat (msec) : 500=0.49% 00:24:50.030 cpu : usr=1.22%, sys=1.47%, ctx=1525, majf=0, minf=1 00:24:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.030 issued rwts: total=0,5350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.030 job3: (groupid=0, jobs=1): err= 0: pid=1049243: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=574, BW=144MiB/s (150MB/s)(1449MiB/10096msec); 0 zone resets 00:24:50.030 slat (usec): min=23, max=91908, avg=1629.85, stdev=3889.27 00:24:50.030 clat (msec): min=6, max=300, avg=109.77, stdev=35.45 00:24:50.030 lat (msec): min=9, max=300, avg=111.40, stdev=35.88 00:24:50.030 clat percentiles (msec): 00:24:50.030 | 1.00th=[ 30], 5.00th=[ 67], 10.00th=[ 79], 20.00th=[ 87], 00:24:50.030 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 109], 00:24:50.030 | 70.00th=[ 118], 80.00th=[ 133], 90.00th=[ 
155], 95.00th=[ 180], 00:24:50.030 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 259], 99.95th=[ 300], 00:24:50.030 | 99.99th=[ 300] 00:24:50.030 bw ( KiB/s): min=75776, max=197120, per=9.30%, avg=146780.15, stdev=31775.47, samples=20 00:24:50.030 iops : min= 296, max= 770, avg=573.35, stdev=124.12, samples=20 00:24:50.030 lat (msec) : 10=0.03%, 20=0.38%, 50=2.42%, 100=44.34%, 250=52.73% 00:24:50.030 lat (msec) : 500=0.10% 00:24:50.030 cpu : usr=1.53%, sys=1.83%, ctx=1806, majf=0, minf=1 00:24:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.030 issued rwts: total=0,5796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.030 job4: (groupid=0, jobs=1): err= 0: pid=1049244: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=417, BW=104MiB/s (109MB/s)(1057MiB/10131msec); 0 zone resets 00:24:50.030 slat (usec): min=21, max=198603, avg=2064.54, stdev=7810.81 00:24:50.030 clat (msec): min=10, max=433, avg=151.22, stdev=85.93 00:24:50.030 lat (msec): min=10, max=456, avg=153.28, stdev=86.89 00:24:50.030 clat percentiles (msec): 00:24:50.030 | 1.00th=[ 25], 5.00th=[ 46], 10.00th=[ 71], 20.00th=[ 88], 00:24:50.030 | 30.00th=[ 101], 40.00th=[ 112], 50.00th=[ 123], 60.00th=[ 136], 00:24:50.030 | 70.00th=[ 178], 80.00th=[ 224], 90.00th=[ 279], 95.00th=[ 342], 00:24:50.030 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 430], 99.95th=[ 430], 00:24:50.030 | 99.99th=[ 435] 00:24:50.030 bw ( KiB/s): min=47616, max=199168, per=6.76%, avg=106610.75, stdev=39880.18, samples=20 00:24:50.030 iops : min= 186, max= 778, avg=416.40, stdev=155.76, samples=20 00:24:50.030 lat (msec) : 20=0.33%, 50=5.70%, 100=23.70%, 250=56.99%, 500=13.27% 00:24:50.030 cpu : usr=0.90%, sys=1.34%, ctx=1574, majf=0, minf=1 00:24:50.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:50.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.030 issued rwts: total=0,4227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.030 job5: (groupid=0, jobs=1): err= 0: pid=1049245: Fri Jul 26 13:35:46 2024 00:24:50.030 write: IOPS=529, BW=132MiB/s (139MB/s)(1343MiB/10140msec); 0 zone resets 00:24:50.030 slat (usec): min=20, max=108431, avg=1750.96, stdev=3882.81 00:24:50.031 clat (msec): min=16, max=283, avg=118.83, stdev=36.55 00:24:50.031 lat (msec): min=18, max=348, avg=120.58, stdev=36.96 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 39], 5.00th=[ 71], 10.00th=[ 80], 20.00th=[ 93], 00:24:50.031 | 30.00th=[ 99], 40.00th=[ 104], 50.00th=[ 110], 60.00th=[ 120], 00:24:50.031 | 70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 167], 95.00th=[ 188], 00:24:50.031 | 99.00th=[ 236], 99.50th=[ 241], 99.90th=[ 275], 99.95th=[ 275], 00:24:50.031 | 99.99th=[ 284] 00:24:50.031 bw ( KiB/s): min=82432, max=204184, per=8.62%, avg=135943.95, stdev=31427.95, samples=20 00:24:50.031 iops : min= 322, max= 797, avg=531.00, stdev=122.70, samples=20 00:24:50.031 lat (msec) : 20=0.11%, 50=1.45%, 100=31.66%, 250=66.67%, 500=0.11% 00:24:50.031 cpu : usr=1.21%, sys=1.64%, ctx=1681, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 
00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,5373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 job6: (groupid=0, jobs=1): err= 0: pid=1049246: Fri Jul 26 13:35:46 2024 00:24:50.031 write: IOPS=589, BW=147MiB/s (154MB/s)(1487MiB/10100msec); 0 zone resets 00:24:50.031 slat (usec): min=21, max=97268, avg=1469.22, stdev=3413.05 00:24:50.031 clat (msec): min=6, max=387, avg=107.12, stdev=39.25 00:24:50.031 lat (msec): min=9, max=387, avg=108.59, stdev=39.62 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 28], 5.00th=[ 62], 10.00th=[ 75], 20.00th=[ 89], 00:24:50.031 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 106], 00:24:50.031 | 70.00th=[ 111], 80.00th=[ 123], 90.00th=[ 144], 95.00th=[ 161], 00:24:50.031 | 99.00th=[ 284], 99.50th=[ 342], 99.90th=[ 384], 99.95th=[ 384], 00:24:50.031 | 99.99th=[ 388] 00:24:50.031 bw ( KiB/s): min=75264, max=186880, per=9.55%, avg=150690.10, stdev=31251.90, samples=20 00:24:50.031 iops : min= 294, max= 730, avg=588.60, stdev=122.15, samples=20 00:24:50.031 lat (msec) : 10=0.03%, 20=0.39%, 50=2.56%, 100=44.41%, 250=51.37% 00:24:50.031 lat (msec) : 500=1.24% 00:24:50.031 cpu : usr=1.27%, sys=1.89%, ctx=2227, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,5949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 job7: (groupid=0, jobs=1): err= 0: pid=1049247: Fri Jul 26 13:35:46 2024 00:24:50.031 write: IOPS=673, BW=168MiB/s (176MB/s)(1695MiB/10070msec); 0 zone resets 00:24:50.031 slat (usec): min=21, max=145647, avg=1470.21, stdev=3439.88 00:24:50.031 clat (msec): min=19, max=228, avg=93.55, stdev=35.37 00:24:50.031 lat (msec): min=19, max=238, avg=95.02, stdev=35.79 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 59], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 70], 00:24:50.031 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 84], 00:24:50.031 | 70.00th=[ 92], 80.00th=[ 115], 90.00th=[ 157], 95.00th=[ 176], 00:24:50.031 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 222], 99.95th=[ 224], 00:24:50.031 | 99.99th=[ 228] 00:24:50.031 bw ( KiB/s): min=88064, max=233984, per=10.90%, avg=171941.10, stdev=51602.87, samples=20 00:24:50.031 iops : min= 344, max= 914, avg=671.60, stdev=201.63, samples=20 00:24:50.031 lat (msec) : 20=0.06%, 50=0.12%, 100=73.85%, 250=25.98% 00:24:50.031 cpu : usr=1.64%, sys=2.08%, ctx=1715, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,6779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 job8: (groupid=0, jobs=1): err= 0: pid=1049248: Fri Jul 26 13:35:46 2024 00:24:50.031 write: IOPS=627, BW=157MiB/s (164MB/s)(1583MiB/10093msec); 0 zone resets 00:24:50.031 slat (usec): min=26, max=34019, avg=1557.34, stdev=2915.49 00:24:50.031 
clat (msec): min=5, max=232, avg=100.45, stdev=20.57 00:24:50.031 lat (msec): min=8, max=232, avg=102.01, stdev=20.71 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 71], 5.00th=[ 78], 10.00th=[ 82], 20.00th=[ 86], 00:24:50.031 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 102], 00:24:50.031 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 134], 00:24:50.031 | 99.00th=[ 184], 99.50th=[ 201], 99.90th=[ 220], 99.95th=[ 224], 00:24:50.031 | 99.99th=[ 234] 00:24:50.031 bw ( KiB/s): min=109056, max=189440, per=10.17%, avg=160447.85, stdev=20714.18, samples=20 00:24:50.031 iops : min= 426, max= 740, avg=626.70, stdev=81.00, samples=20 00:24:50.031 lat (msec) : 10=0.03%, 20=0.16%, 50=0.32%, 100=56.49%, 250=43.00% 00:24:50.031 cpu : usr=1.57%, sys=2.08%, ctx=1679, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,6330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 job9: (groupid=0, jobs=1): err= 0: pid=1049255: Fri Jul 26 13:35:46 2024 00:24:50.031 write: IOPS=555, BW=139MiB/s (146MB/s)(1405MiB/10107msec); 0 zone resets 00:24:50.031 slat (usec): min=26, max=44028, avg=1741.60, stdev=3352.55 00:24:50.031 clat (msec): min=16, max=215, avg=113.36, stdev=26.01 00:24:50.031 lat (msec): min=16, max=215, avg=115.10, stdev=26.19 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 66], 5.00th=[ 75], 10.00th=[ 82], 20.00th=[ 90], 00:24:50.031 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 113], 60.00th=[ 118], 00:24:50.031 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 165], 00:24:50.031 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 207], 99.95th=[ 207], 00:24:50.031 | 99.99th=[ 215] 00:24:50.031 bw ( KiB/s): min=96256, max=209920, per=9.01%, avg=142223.65, stdev=27333.48, samples=20 00:24:50.031 iops : min= 376, max= 820, avg=555.55, stdev=106.77, samples=20 00:24:50.031 lat (msec) : 20=0.11%, 50=0.52%, 100=29.28%, 250=70.10% 00:24:50.031 cpu : usr=1.38%, sys=1.81%, ctx=1548, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,5618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 job10: (groupid=0, jobs=1): err= 0: pid=1049262: Fri Jul 26 13:35:46 2024 00:24:50.031 write: IOPS=537, BW=134MiB/s (141MB/s)(1358MiB/10104msec); 0 zone resets 00:24:50.031 slat (usec): min=22, max=53073, avg=1756.96, stdev=3416.98 00:24:50.031 clat (msec): min=6, max=253, avg=117.25, stdev=29.98 00:24:50.031 lat (msec): min=9, max=253, avg=119.01, stdev=30.30 00:24:50.031 clat percentiles (msec): 00:24:50.031 | 1.00th=[ 41], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 96], 00:24:50.031 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 115], 00:24:50.031 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 159], 95.00th=[ 167], 00:24:50.031 | 99.00th=[ 211], 99.50th=[ 228], 99.90th=[ 245], 99.95th=[ 253], 00:24:50.031 | 99.99th=[ 253] 00:24:50.031 bw ( KiB/s): min=100553, max=169472, per=8.71%, avg=137456.45, stdev=24203.27, samples=20 
00:24:50.031 iops : min= 392, max= 662, avg=536.90, stdev=94.61, samples=20 00:24:50.031 lat (msec) : 10=0.04%, 20=0.26%, 50=1.25%, 100=27.12%, 250=71.28% 00:24:50.031 lat (msec) : 500=0.06% 00:24:50.031 cpu : usr=1.20%, sys=1.98%, ctx=1612, majf=0, minf=1 00:24:50.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:50.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:50.031 issued rwts: total=0,5432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.031 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:50.031 00:24:50.031 Run status group 0 (all jobs): 00:24:50.031 WRITE: bw=1541MiB/s (1616MB/s), 104MiB/s-168MiB/s (109MB/s-176MB/s), io=15.3GiB (16.4GB), run=10070-10140msec 00:24:50.031 00:24:50.031 Disk stats (read/write): 00:24:50.031 nvme0n1: ios=53/11702, merge=0/0, ticks=3702/1218109, in_queue=1221811, util=100.00% 00:24:50.031 nvme10n1: ios=48/11540, merge=0/0, ticks=140/1230317, in_queue=1230457, util=97.46% 00:24:50.031 nvme1n1: ios=50/10685, merge=0/0, ticks=2883/1228211, in_queue=1231094, util=99.89% 00:24:50.031 nvme2n1: ios=33/11202, merge=0/0, ticks=1360/1195753, in_queue=1197113, util=100.00% 00:24:50.031 nvme3n1: ios=45/8419, merge=0/0, ticks=3071/1177755, in_queue=1180826, util=100.00% 00:24:50.031 nvme4n1: ios=44/10689, merge=0/0, ticks=495/1227188, in_queue=1227683, util=99.97% 00:24:50.031 nvme5n1: ios=32/11893, merge=0/0, ticks=2767/1234816, in_queue=1237583, util=100.00% 00:24:50.031 nvme6n1: ios=40/13188, merge=0/0, ticks=1362/1191075, in_queue=1192437, util=99.99% 00:24:50.031 nvme7n1: ios=0/12348, merge=0/0, ticks=0/1193303, in_queue=1193303, util=98.63% 00:24:50.031 nvme8n1: ios=0/11226, merge=0/0, ticks=0/1227778, in_queue=1227778, util=98.90% 00:24:50.031 nvme9n1: ios=0/10854, merge=0/0, ticks=0/1230320, in_queue=1230320, util=99.09% 00:24:50.031 13:35:46 -- target/multiconnection.sh@36 -- # sync 00:24:50.031 13:35:46 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:50.031 13:35:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.031 13:35:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:50.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:50.031 13:35:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:50.031 13:35:47 -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.031 13:35:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:50.031 13:35:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:24:50.031 13:35:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:50.031 13:35:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:50.031 13:35:47 -- common/autotest_common.sh@1210 -- # return 0 00:24:50.031 13:35:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.031 13:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.031 13:35:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.031 13:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.031 13:35:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.031 13:35:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:50.293 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:50.293 13:35:47 -- target/multiconnection.sh@39 
-- # waitforserial_disconnect SPDK2 00:24:50.293 13:35:47 -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.293 13:35:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:50.293 13:35:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:24:50.293 13:35:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:50.293 13:35:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:50.293 13:35:47 -- common/autotest_common.sh@1210 -- # return 0 00:24:50.293 13:35:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:50.293 13:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.293 13:35:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.293 13:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.293 13:35:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.293 13:35:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:50.554 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:50.554 13:35:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:50.554 13:35:47 -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.554 13:35:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:50.554 13:35:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:24:50.554 13:35:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:50.554 13:35:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:50.554 13:35:47 -- common/autotest_common.sh@1210 -- # return 0 00:24:50.554 13:35:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:50.554 13:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.554 13:35:47 -- common/autotest_common.sh@10 -- # set +x 00:24:50.554 13:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.554 13:35:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.554 13:35:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:50.815 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:50.815 13:35:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:50.815 13:35:48 -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.815 13:35:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:50.815 13:35:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:24:50.815 13:35:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:50.815 13:35:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:50.815 13:35:48 -- common/autotest_common.sh@1210 -- # return 0 00:24:50.815 13:35:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:50.815 13:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.815 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:24:50.815 13:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.815 13:35:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.815 13:35:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:51.076 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:51.076 13:35:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:51.076 13:35:48 -- common/autotest_common.sh@1198 -- # 
local i=0 00:24:51.076 13:35:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:51.076 13:35:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:24:51.076 13:35:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:51.076 13:35:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:51.076 13:35:48 -- common/autotest_common.sh@1210 -- # return 0 00:24:51.076 13:35:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:51.076 13:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.076 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:24:51.076 13:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.076 13:35:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.076 13:35:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:51.336 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:51.336 13:35:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:51.336 13:35:48 -- common/autotest_common.sh@1198 -- # local i=0 00:24:51.336 13:35:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:51.336 13:35:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:24:51.336 13:35:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:51.336 13:35:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:51.336 13:35:48 -- common/autotest_common.sh@1210 -- # return 0 00:24:51.336 13:35:48 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:51.336 13:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.336 13:35:48 -- common/autotest_common.sh@10 -- # set +x 00:24:51.336 13:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.336 13:35:48 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.336 13:35:48 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:51.597 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:51.597 13:35:48 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:51.597 13:35:48 -- common/autotest_common.sh@1198 -- # local i=0 00:24:51.597 13:35:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:51.597 13:35:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:24:51.597 13:35:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:51.597 13:35:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:51.597 13:35:49 -- common/autotest_common.sh@1210 -- # return 0 00:24:51.597 13:35:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:51.597 13:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.597 13:35:49 -- common/autotest_common.sh@10 -- # set +x 00:24:51.597 13:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.597 13:35:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.597 13:35:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:51.858 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:51.858 13:35:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:51.858 13:35:49 -- common/autotest_common.sh@1198 -- # local i=0 00:24:51.858 13:35:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 
00:24:51.858 13:35:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:24:51.858 13:35:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:51.858 13:35:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:51.858 13:35:49 -- common/autotest_common.sh@1210 -- # return 0 00:24:51.858 13:35:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:51.858 13:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.858 13:35:49 -- common/autotest_common.sh@10 -- # set +x 00:24:51.858 13:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.858 13:35:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.858 13:35:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:52.119 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:52.119 13:35:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:52.119 13:35:49 -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.119 13:35:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:52.119 13:35:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:24:52.119 13:35:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:52.119 13:35:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.119 13:35:49 -- common/autotest_common.sh@1210 -- # return 0 00:24:52.119 13:35:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:52.119 13:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.119 13:35:49 -- common/autotest_common.sh@10 -- # set +x 00:24:52.119 13:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.119 13:35:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.119 13:35:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:52.119 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:52.119 13:35:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:52.119 13:35:49 -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.119 13:35:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:52.119 13:35:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:24:52.119 13:35:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.119 13:35:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:24:52.119 13:35:49 -- common/autotest_common.sh@1210 -- # return 0 00:24:52.119 13:35:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:52.119 13:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.119 13:35:49 -- common/autotest_common.sh@10 -- # set +x 00:24:52.119 13:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.119 13:35:49 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.119 13:35:49 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:52.380 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:52.380 13:35:49 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:52.380 13:35:49 -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.380 13:35:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:52.380 13:35:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 
00:24:52.380 13:35:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:52.380 13:35:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:24:52.380 13:35:49 -- common/autotest_common.sh@1210 -- # return 0 00:24:52.380 13:35:49 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:52.380 13:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:52.380 13:35:49 -- common/autotest_common.sh@10 -- # set +x 00:24:52.380 13:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:52.380 13:35:49 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:52.380 13:35:49 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:52.380 13:35:49 -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:52.380 13:35:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:52.380 13:35:49 -- nvmf/common.sh@116 -- # sync 00:24:52.380 13:35:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:52.380 13:35:49 -- nvmf/common.sh@119 -- # set +e 00:24:52.380 13:35:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:52.380 13:35:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:52.380 rmmod nvme_tcp 00:24:52.380 rmmod nvme_fabrics 00:24:52.380 rmmod nvme_keyring 00:24:52.380 13:35:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:52.380 13:35:49 -- nvmf/common.sh@123 -- # set -e 00:24:52.380 13:35:49 -- nvmf/common.sh@124 -- # return 0 00:24:52.380 13:35:49 -- nvmf/common.sh@477 -- # '[' -n 1038247 ']' 00:24:52.380 13:35:49 -- nvmf/common.sh@478 -- # killprocess 1038247 00:24:52.380 13:35:49 -- common/autotest_common.sh@926 -- # '[' -z 1038247 ']' 00:24:52.380 13:35:49 -- common/autotest_common.sh@930 -- # kill -0 1038247 00:24:52.380 13:35:49 -- common/autotest_common.sh@931 -- # uname 00:24:52.641 13:35:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:52.641 13:35:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1038247 00:24:52.641 13:35:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:52.641 13:35:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:52.641 13:35:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1038247' 00:24:52.641 killing process with pid 1038247 00:24:52.641 13:35:49 -- common/autotest_common.sh@945 -- # kill 1038247 00:24:52.641 13:35:49 -- common/autotest_common.sh@950 -- # wait 1038247 00:24:52.902 13:35:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:52.902 13:35:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:52.902 13:35:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:52.902 13:35:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.902 13:35:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:52.902 13:35:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.902 13:35:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.902 13:35:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.818 13:35:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:54.818 00:24:54.818 real 1m17.354s 00:24:54.818 user 4m57.654s 00:24:54.818 sys 0m20.728s 00:24:54.818 13:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.818 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:24:54.818 ************************************ 00:24:54.818 END TEST nvmf_multiconnection 00:24:54.818 ************************************ 
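The teardown traced above repeats one pattern per subsystem: disconnect the host-side controller, poll lsblk until the namespace serial (SPDK2 .. SPDK11) disappears, then remove the subsystem over RPC. A minimal bash sketch of that pattern follows, assuming SPDK's scripts/rpc.py client is reachable and the NQN/serial naming used in this run; it stands in for the waitforserial_disconnect helper seen in the trace rather than reproducing it exactly.

# Teardown sketch for subsystems cnode1..cnode11 (count taken from this run).
for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Wait until no block device reports serial SPDK<i> any more, so the delete
    # does not race with the still-disconnecting controller.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done

The traced helper also keeps a retry counter (the "local i=0" above) so it cannot wait forever; the sketch keeps only the ordering that matters: disconnect first, confirm the device is gone, delete last.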
00:24:55.080 13:35:52 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:55.080 13:35:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:55.080 13:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.080 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:24:55.080 ************************************ 00:24:55.080 START TEST nvmf_initiator_timeout 00:24:55.080 ************************************ 00:24:55.080 13:35:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:55.080 * Looking for test storage... 00:24:55.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:55.080 13:35:52 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.080 13:35:52 -- nvmf/common.sh@7 -- # uname -s 00:24:55.080 13:35:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.080 13:35:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.080 13:35:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.080 13:35:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.080 13:35:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.080 13:35:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.080 13:35:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.080 13:35:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.080 13:35:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.080 13:35:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.080 13:35:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.080 13:35:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.080 13:35:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.080 13:35:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.080 13:35:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.080 13:35:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.080 13:35:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.080 13:35:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.080 13:35:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.080 13:35:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.080 13:35:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.081 13:35:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.081 13:35:52 -- paths/export.sh@5 -- # export PATH 00:24:55.081 13:35:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.081 13:35:52 -- nvmf/common.sh@46 -- # : 0 00:24:55.081 13:35:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:55.081 13:35:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:55.081 13:35:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:55.081 13:35:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.081 13:35:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.081 13:35:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:55.081 13:35:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:55.081 13:35:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:55.081 13:35:52 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.081 13:35:52 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.081 13:35:52 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:55.081 13:35:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:55.081 13:35:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.081 13:35:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:55.081 13:35:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:55.081 13:35:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:55.081 13:35:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.081 13:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.081 13:35:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.081 13:35:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:55.081 13:35:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:55.081 13:35:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:55.081 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:25:01.739 13:35:58 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:25:01.739 13:35:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:01.739 13:35:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:01.739 13:35:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:01.739 13:35:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:01.739 13:35:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:01.739 13:35:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:01.739 13:35:58 -- nvmf/common.sh@294 -- # net_devs=() 00:25:01.739 13:35:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:01.739 13:35:58 -- nvmf/common.sh@295 -- # e810=() 00:25:01.739 13:35:58 -- nvmf/common.sh@295 -- # local -ga e810 00:25:01.739 13:35:58 -- nvmf/common.sh@296 -- # x722=() 00:25:01.739 13:35:58 -- nvmf/common.sh@296 -- # local -ga x722 00:25:01.739 13:35:58 -- nvmf/common.sh@297 -- # mlx=() 00:25:01.739 13:35:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:01.739 13:35:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.739 13:35:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:01.739 13:35:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:01.739 13:35:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:01.739 13:35:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:01.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:01.739 13:35:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:01.739 13:35:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:01.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:01.739 13:35:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:01.739 13:35:58 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:01.739 13:35:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.739 13:35:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.739 13:35:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:01.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:01.739 13:35:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.739 13:35:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:01.739 13:35:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.739 13:35:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.739 13:35:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:01.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:01.739 13:35:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.739 13:35:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:01.739 13:35:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:01.739 13:35:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:01.739 13:35:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.739 13:35:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.739 13:35:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.739 13:35:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:01.739 13:35:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.739 13:35:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.739 13:35:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:01.739 13:35:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.739 13:35:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.739 13:35:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:01.739 13:35:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:01.740 13:35:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.740 13:35:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.740 13:35:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.740 13:35:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.740 13:35:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:01.740 13:35:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.001 13:35:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.001 13:35:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.001 13:35:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:02.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:25:02.001 00:25:02.001 --- 10.0.0.2 ping statistics --- 00:25:02.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.001 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:25:02.001 13:35:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:25:02.001 00:25:02.001 --- 10.0.0.1 ping statistics --- 00:25:02.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.001 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:25:02.001 13:35:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.001 13:35:59 -- nvmf/common.sh@410 -- # return 0 00:25:02.001 13:35:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.001 13:35:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.001 13:35:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.001 13:35:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.001 13:35:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.001 13:35:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.001 13:35:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.001 13:35:59 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:02.001 13:35:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.001 13:35:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.001 13:35:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.001 13:35:59 -- nvmf/common.sh@469 -- # nvmfpid=1055917 00:25:02.001 13:35:59 -- nvmf/common.sh@470 -- # waitforlisten 1055917 00:25:02.001 13:35:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:02.001 13:35:59 -- common/autotest_common.sh@819 -- # '[' -z 1055917 ']' 00:25:02.002 13:35:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.002 13:35:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.002 13:35:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.002 13:35:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.002 13:35:59 -- common/autotest_common.sh@10 -- # set +x 00:25:02.002 [2024-07-26 13:35:59.388844] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:02.002 [2024-07-26 13:35:59.388903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.002 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.002 [2024-07-26 13:35:59.456141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.263 [2024-07-26 13:35:59.485811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.263 [2024-07-26 13:35:59.485943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:02.263 [2024-07-26 13:35:59.485953] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.263 [2024-07-26 13:35:59.485962] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.263 [2024-07-26 13:35:59.486108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.263 [2024-07-26 13:35:59.486261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.263 [2024-07-26 13:35:59.486569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.263 [2024-07-26 13:35:59.486569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.835 13:36:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:02.835 13:36:00 -- common/autotest_common.sh@852 -- # return 0 00:25:02.835 13:36:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:02.835 13:36:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 13:36:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 Malloc0 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 Delay0 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 [2024-07-26 13:36:00.230744] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.835 13:36:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:02.835 13:36:00 -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 [2024-07-26 13:36:00.267767] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.835 13:36:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:02.835 13:36:00 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:04.750 13:36:01 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:04.750 13:36:01 -- common/autotest_common.sh@1177 -- # local i=0 00:25:04.751 13:36:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.751 13:36:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:04.751 13:36:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:06.698 13:36:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:06.698 13:36:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:06.698 13:36:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:06.698 13:36:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:06.698 13:36:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.698 13:36:03 -- common/autotest_common.sh@1187 -- # return 0 00:25:06.698 13:36:03 -- target/initiator_timeout.sh@35 -- # fio_pid=1056817 00:25:06.698 13:36:03 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:06.698 13:36:03 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:06.698 [global] 00:25:06.698 thread=1 00:25:06.698 invalidate=1 00:25:06.698 rw=write 00:25:06.698 time_based=1 00:25:06.698 runtime=60 00:25:06.698 ioengine=libaio 00:25:06.698 direct=1 00:25:06.698 bs=4096 00:25:06.698 iodepth=1 00:25:06.698 norandommap=0 00:25:06.698 numjobs=1 00:25:06.698 00:25:06.698 verify_dump=1 00:25:06.698 verify_backlog=512 00:25:06.698 verify_state_save=0 00:25:06.698 do_verify=1 00:25:06.698 verify=crc32c-intel 00:25:06.698 [job0] 00:25:06.698 filename=/dev/nvme0n1 00:25:06.698 Could not set queue depth (nvme0n1) 00:25:06.963 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:06.963 fio-3.35 00:25:06.963 Starting 1 thread 00:25:09.511 13:36:06 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:09.511 13:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.511 13:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:09.511 true 00:25:09.511 13:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.511 13:36:06 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:09.511 13:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.511 13:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:09.511 true 00:25:09.511 13:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.511 13:36:06 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:09.511 13:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.511 13:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:09.511 true 00:25:09.511 13:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.511 13:36:06 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
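At this point the 60-second fio write job started above is running against /dev/nvme0n1 while the test inflates the latencies of the Delay0 bdev behind the namespace, so outstanding commands stall well past a typical 30-second host I/O timeout; a few lines further on the latencies are dropped back to 30 microseconds and fio is allowed to finish. A hedged sketch of that manipulation, using SPDK's scripts/rpc.py client in place of the rpc_cmd wrapper traced here, with the bdev name and values copied from this run:

# Assumes a running nvmf_tgt where Delay0 was created over Malloc0 with 30 us
# latencies (bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30).
RPC=scripts/rpc.py

# Stall in-flight I/O: push average and p99 latencies into the tens of seconds
# (values in microseconds, mirroring the trace; p99_write is set higher there).
$RPC bdev_delay_update_latency Delay0 avg_read  31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read  31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3   # let the stalled window take effect, as the script does between the two batches

# Recover: restore 30 us latencies so the fio job can complete and verify.
for metric in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$metric" 30
done

Keeping the target responsive while data I/O is made arbitrarily slow is what lets this test exercise the initiator's timeout path without tearing down the connection.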
00:25:09.511 13:36:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.512 13:36:06 -- common/autotest_common.sh@10 -- # set +x 00:25:09.512 true 00:25:09.512 13:36:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.512 13:36:06 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:12.815 13:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.815 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 true 00:25:12.815 13:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:12.815 13:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.815 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 true 00:25:12.815 13:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:12.815 13:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.815 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 true 00:25:12.815 13:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:12.815 13:36:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.815 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:25:12.815 true 00:25:12.815 13:36:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:12.815 13:36:09 -- target/initiator_timeout.sh@54 -- # wait 1056817 00:26:09.094 00:26:09.094 job0: (groupid=0, jobs=1): err= 0: pid=1057120: Fri Jul 26 13:37:04 2024 00:26:09.094 read: IOPS=44, BW=180KiB/s (184kB/s)(10.5MiB/60036msec) 00:26:09.094 slat (usec): min=7, max=6417, avg=29.08, stdev=123.08 00:26:09.094 clat (usec): min=1126, max=42193k, avg=21103.10, stdev=812145.98 00:26:09.094 lat (usec): min=1153, max=42193k, avg=21132.19, stdev=812146.05 00:26:09.094 clat percentiles (usec): 00:26:09.094 | 1.00th=[ 1205], 5.00th=[ 1270], 10.00th=[ 1303], 00:26:09.094 | 20.00th=[ 1336], 30.00th=[ 1352], 40.00th=[ 1369], 00:26:09.094 | 50.00th=[ 1369], 60.00th=[ 1385], 70.00th=[ 1401], 00:26:09.094 | 80.00th=[ 1418], 90.00th=[ 41681], 95.00th=[ 42206], 00:26:09.094 | 99.00th=[ 42730], 99.50th=[ 42730], 99.90th=[ 43254], 00:26:09.094 | 99.95th=[ 43779], 99.99th=[17112761] 00:26:09.094 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60036msec); 0 zone resets 00:26:09.094 slat (usec): min=9, max=31796, avg=43.55, stdev=573.11 00:26:09.094 clat (usec): min=527, max=1431, avg=917.15, stdev=90.13 00:26:09.094 lat (usec): min=555, max=32679, avg=960.70, stdev=579.76 00:26:09.094 clat percentiles (usec): 00:26:09.094 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 783], 20.00th=[ 857], 00:26:09.094 | 30.00th=[ 889], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 955], 00:26:09.094 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1004], 95.00th=[ 1020], 00:26:09.094 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1156], 99.95th=[ 1237], 00:26:09.094 | 99.99th=[ 1434] 00:26:09.094 bw ( KiB/s): min= 80, max= 4096, per=100.00%, avg=2234.18, stdev=1458.30, samples=11 00:26:09.094 iops : min= 20, max= 1024, avg=558.55, stdev=364.57, samples=11 00:26:09.094 lat (usec) : 750=3.95%, 1000=43.55% 
00:26:09.094 lat (msec) : 2=47.77%, 50=4.71%, >=2000=0.02% 00:26:09.094 cpu : usr=0.19%, sys=0.37%, ctx=5776, majf=0, minf=1 00:26:09.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.094 issued rwts: total=2699,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:09.094 00:26:09.094 Run status group 0 (all jobs): 00:26:09.094 READ: bw=180KiB/s (184kB/s), 180KiB/s-180KiB/s (184kB/s-184kB/s), io=10.5MiB (11.1MB), run=60036-60036msec 00:26:09.094 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60036-60036msec 00:26:09.094 00:26:09.094 Disk stats (read/write): 00:26:09.094 nvme0n1: ios=2748/3072, merge=0/0, ticks=14787/2604, in_queue=17391, util=99.90% 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:09.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:09.094 13:37:04 -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.094 13:37:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:09.094 13:37:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:09.094 13:37:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:09.094 13:37:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:09.094 13:37:04 -- common/autotest_common.sh@1210 -- # return 0 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:09.094 nvmf hotplug test: fio successful as expected 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.094 13:37:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.094 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:09.094 13:37:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:09.094 13:37:04 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:09.094 13:37:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:09.094 13:37:04 -- nvmf/common.sh@116 -- # sync 00:26:09.094 13:37:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:09.094 13:37:04 -- nvmf/common.sh@119 -- # set +e 00:26:09.094 13:37:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:09.094 13:37:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:09.094 rmmod nvme_tcp 00:26:09.094 rmmod nvme_fabrics 00:26:09.094 rmmod nvme_keyring 00:26:09.095 13:37:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:09.095 13:37:04 -- nvmf/common.sh@123 -- # set -e 00:26:09.095 13:37:04 -- nvmf/common.sh@124 -- # return 0 00:26:09.095 13:37:04 -- nvmf/common.sh@477 -- # '[' -n 1055917 ']' 00:26:09.095 13:37:04 -- nvmf/common.sh@478 -- # killprocess 1055917 00:26:09.095 13:37:04 -- common/autotest_common.sh@926 -- # '[' -z 1055917 ']' 00:26:09.095 13:37:04 -- common/autotest_common.sh@930 -- # kill -0 1055917 
00:26:09.095 13:37:04 -- common/autotest_common.sh@931 -- # uname 00:26:09.095 13:37:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:09.095 13:37:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1055917 00:26:09.095 13:37:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:09.095 13:37:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:09.095 13:37:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1055917' 00:26:09.095 killing process with pid 1055917 00:26:09.095 13:37:04 -- common/autotest_common.sh@945 -- # kill 1055917 00:26:09.095 13:37:04 -- common/autotest_common.sh@950 -- # wait 1055917 00:26:09.095 13:37:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:09.095 13:37:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:09.095 13:37:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:09.095 13:37:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.095 13:37:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:09.095 13:37:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.095 13:37:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.095 13:37:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.722 13:37:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:09.722 00:26:09.722 real 1m14.613s 00:26:09.722 user 4m37.930s 00:26:09.722 sys 0m6.775s 00:26:09.722 13:37:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.722 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:09.722 ************************************ 00:26:09.722 END TEST nvmf_initiator_timeout 00:26:09.722 ************************************ 00:26:09.722 13:37:06 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:09.722 13:37:06 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:09.722 13:37:06 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:09.722 13:37:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:09.722 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:16.309 13:37:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:16.309 13:37:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:16.309 13:37:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:16.309 13:37:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:16.309 13:37:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:16.309 13:37:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:16.309 13:37:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:16.309 13:37:13 -- nvmf/common.sh@294 -- # net_devs=() 00:26:16.309 13:37:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:16.309 13:37:13 -- nvmf/common.sh@295 -- # e810=() 00:26:16.309 13:37:13 -- nvmf/common.sh@295 -- # local -ga e810 00:26:16.309 13:37:13 -- nvmf/common.sh@296 -- # x722=() 00:26:16.309 13:37:13 -- nvmf/common.sh@296 -- # local -ga x722 00:26:16.309 13:37:13 -- nvmf/common.sh@297 -- # mlx=() 00:26:16.309 13:37:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:16.309 13:37:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:26:16.309 13:37:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.309 13:37:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:16.310 13:37:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:16.310 13:37:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:16.310 13:37:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:16.310 13:37:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:16.310 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:16.310 13:37:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:16.310 13:37:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:16.310 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:16.310 13:37:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:16.310 13:37:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:16.310 13:37:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.310 13:37:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:16.310 13:37:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.310 13:37:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:16.310 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:16.310 13:37:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.310 13:37:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:16.310 13:37:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.310 13:37:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:16.310 13:37:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.310 13:37:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:16.310 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:16.310 13:37:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.310 13:37:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:16.310 13:37:13 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.310 13:37:13 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:16.310 13:37:13 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:16.310 13:37:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:16.310 13:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.310 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:16.310 ************************************ 00:26:16.310 START TEST nvmf_perf_adq 00:26:16.310 ************************************ 00:26:16.310 13:37:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:16.310 * Looking for test storage... 00:26:16.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.310 13:37:13 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.310 13:37:13 -- nvmf/common.sh@7 -- # uname -s 00:26:16.310 13:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.310 13:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.310 13:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.310 13:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.310 13:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.310 13:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.310 13:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.310 13:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.310 13:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.310 13:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.310 13:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.310 13:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.310 13:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.310 13:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.310 13:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.310 13:37:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.310 13:37:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.310 13:37:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.310 13:37:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.310 13:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.310 13:37:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.310 13:37:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.310 13:37:13 -- paths/export.sh@5 -- # export PATH 00:26:16.310 13:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.310 13:37:13 -- nvmf/common.sh@46 -- # : 0 00:26:16.310 13:37:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:16.310 13:37:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:16.310 13:37:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:16.310 13:37:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.310 13:37:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.310 13:37:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:16.310 13:37:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:16.310 13:37:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:16.310 13:37:13 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:16.310 13:37:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:16.310 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.453 13:37:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:24.453 13:37:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:24.453 13:37:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:24.453 13:37:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:24.453 13:37:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:24.453 13:37:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:24.453 13:37:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:24.453 13:37:20 -- nvmf/common.sh@294 -- # net_devs=() 00:26:24.453 13:37:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:24.453 13:37:20 -- nvmf/common.sh@295 -- # e810=() 00:26:24.453 13:37:20 -- nvmf/common.sh@295 -- # local -ga e810 00:26:24.453 13:37:20 -- nvmf/common.sh@296 -- # x722=() 00:26:24.453 13:37:20 -- nvmf/common.sh@296 -- # local -ga x722 00:26:24.453 13:37:20 -- nvmf/common.sh@297 -- # mlx=() 00:26:24.453 13:37:20 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:26:24.453 13:37:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.453 13:37:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.453 13:37:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.453 13:37:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.454 13:37:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:24.454 13:37:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:24.454 13:37:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:24.454 13:37:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.454 13:37:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:24.454 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:24.454 13:37:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.454 13:37:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:24.454 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:24.454 13:37:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:24.454 13:37:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:24.454 13:37:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.454 13:37:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.454 13:37:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.454 13:37:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.454 13:37:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:24.454 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:24.454 13:37:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.454 13:37:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.454 13:37:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:24.454 13:37:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.454 13:37:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.454 13:37:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:24.454 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:24.454 13:37:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.454 13:37:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:24.454 13:37:20 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.454 13:37:20 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:24.454 13:37:20 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:24.454 13:37:20 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:24.454 13:37:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:24.715 13:37:22 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:26.629 13:37:24 -- target/perf_adq.sh@54 -- # sleep 5 00:26:31.924 13:37:29 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:31.924 13:37:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:31.924 13:37:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.924 13:37:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:31.924 13:37:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:31.924 13:37:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:31.924 13:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.924 13:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.924 13:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.924 13:37:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:31.924 13:37:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:31.924 13:37:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:31.924 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:26:31.924 13:37:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:31.924 13:37:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:31.924 13:37:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:31.924 13:37:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:31.924 13:37:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:31.924 13:37:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:31.924 13:37:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:31.924 13:37:29 -- nvmf/common.sh@294 -- # net_devs=() 00:26:31.924 13:37:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:31.924 13:37:29 -- nvmf/common.sh@295 -- # e810=() 00:26:31.924 13:37:29 -- nvmf/common.sh@295 -- # local -ga e810 00:26:31.924 13:37:29 -- nvmf/common.sh@296 -- # x722=() 00:26:31.924 13:37:29 -- nvmf/common.sh@296 -- # local -ga x722 00:26:31.924 13:37:29 -- nvmf/common.sh@297 -- # mlx=() 00:26:31.924 13:37:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:31.924 13:37:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.924 13:37:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.925 13:37:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.925 13:37:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.925 13:37:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.925 13:37:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.925 13:37:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:31.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:31.925 13:37:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.925 13:37:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:31.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:31.925 13:37:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.925 13:37:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.925 13:37:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.925 13:37:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:31.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:31.925 13:37:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.925 13:37:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.925 13:37:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.925 13:37:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:31.925 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:31.925 13:37:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:31.925 13:37:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:31.925 13:37:29 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:31.925 13:37:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.925 13:37:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.925 13:37:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:31.925 13:37:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.925 13:37:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.925 13:37:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:31.925 13:37:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.925 13:37:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.925 13:37:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:31.925 13:37:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:31.925 13:37:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.925 13:37:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.925 13:37:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.925 13:37:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.925 13:37:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:31.925 13:37:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.925 13:37:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.925 13:37:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.925 13:37:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:31.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.761 ms 00:26:31.925 00:26:31.925 --- 10.0.0.2 ping statistics --- 00:26:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.925 rtt min/avg/max/mdev = 0.761/0.761/0.761/0.000 ms 00:26:31.925 13:37:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:26:31.925 00:26:31.925 --- 10.0.0.1 ping statistics --- 00:26:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.925 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:26:31.925 13:37:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.925 13:37:29 -- nvmf/common.sh@410 -- # return 0 00:26:31.925 13:37:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:31.925 13:37:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.925 13:37:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:31.925 13:37:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.925 13:37:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:31.925 13:37:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:31.925 13:37:29 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:31.925 13:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:31.925 13:37:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:31.925 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:26:31.925 13:37:29 -- nvmf/common.sh@469 -- # nvmfpid=1078721 00:26:32.186 13:37:29 -- nvmf/common.sh@470 -- # waitforlisten 1078721 00:26:32.186 13:37:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:32.186 13:37:29 -- common/autotest_common.sh@819 -- # '[' -z 1078721 ']' 00:26:32.186 13:37:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.186 13:37:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.186 13:37:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.186 13:37:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.186 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.186 [2024-07-26 13:37:29.453206] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:32.186 [2024-07-26 13:37:29.453275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.186 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.186 [2024-07-26 13:37:29.524810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.186 [2024-07-26 13:37:29.563038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:32.186 [2024-07-26 13:37:29.563185] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.186 [2024-07-26 13:37:29.563197] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.186 [2024-07-26 13:37:29.563214] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
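The nvmf_tcp_init sequence traced above splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and both directions are checked with a single ping. Condensed into plain commands (interface names and addresses as in this run, error handling omitted), the bring-up is roughly:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420, as in the trace
ping -c 1 10.0.0.2                                 # root namespace -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace -> initiator port

Everything the target runs afterwards is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is why nvmf_tgt listens on 10.0.0.2 while spdk_nvme_perf connects from the root namespace.
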
00:26:32.186 [2024-07-26 13:37:29.563304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.186 [2024-07-26 13:37:29.563416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.186 [2024-07-26 13:37:29.563577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.186 [2024-07-26 13:37:29.563579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.758 13:37:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:32.758 13:37:30 -- common/autotest_common.sh@852 -- # return 0 00:26:32.758 13:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:32.758 13:37:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:32.758 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 13:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.019 13:37:30 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:33.019 13:37:30 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:33.019 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.019 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.019 13:37:30 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:33.019 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.019 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.019 13:37:30 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:33.019 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.019 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.019 [2024-07-26 13:37:30.355111] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.019 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.020 13:37:30 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:33.020 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.020 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 Malloc1 00:26:33.020 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.020 13:37:30 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.020 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.020 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.020 13:37:30 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:33.020 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.020 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 13:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.020 13:37:30 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.020 13:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.020 13:37:30 -- common/autotest_common.sh@10 -- # set +x 00:26:33.020 [2024-07-26 13:37:30.411992] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.020 13:37:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.020 13:37:30 -- target/perf_adq.sh@73 -- # perfpid=1078902 00:26:33.020 13:37:30 -- target/perf_adq.sh@74 -- # sleep 2 00:26:33.020 13:37:30 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:33.020 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.565 13:37:32 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:35.565 13:37:32 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:35.565 13:37:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.565 13:37:32 -- target/perf_adq.sh@76 -- # wc -l 00:26:35.565 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:26:35.565 13:37:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.565 13:37:32 -- target/perf_adq.sh@76 -- # count=4 00:26:35.565 13:37:32 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:35.565 13:37:32 -- target/perf_adq.sh@81 -- # wait 1078902 00:26:43.759 Initializing NVMe Controllers 00:26:43.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:43.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:43.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:43.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:43.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:43.759 Initialization complete. Launching workers. 00:26:43.759 ======================================================== 00:26:43.759 Latency(us) 00:26:43.759 Device Information : IOPS MiB/s Average min max 00:26:43.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11422.10 44.62 5604.58 1299.59 9579.18 00:26:43.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15159.20 59.22 4221.70 1283.48 13052.19 00:26:43.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14822.20 57.90 4317.66 1188.44 12232.09 00:26:43.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13120.80 51.25 4877.68 1333.63 11816.39 00:26:43.759 ======================================================== 00:26:43.759 Total : 54524.29 212.99 4695.34 1188.44 13052.19 00:26:43.759 00:26:43.759 13:37:40 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:43.759 13:37:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:43.759 13:37:40 -- nvmf/common.sh@116 -- # sync 00:26:43.759 13:37:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:43.759 13:37:40 -- nvmf/common.sh@119 -- # set +e 00:26:43.759 13:37:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:43.759 13:37:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:43.759 rmmod nvme_tcp 00:26:43.759 rmmod nvme_fabrics 00:26:43.759 rmmod nvme_keyring 00:26:43.759 13:37:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:43.759 13:37:40 -- nvmf/common.sh@123 -- # set -e 00:26:43.759 13:37:40 -- nvmf/common.sh@124 -- # return 0 00:26:43.759 13:37:40 -- nvmf/common.sh@477 -- # '[' -n 1078721 ']' 00:26:43.759 13:37:40 -- nvmf/common.sh@478 -- # killprocess 1078721 00:26:43.759 13:37:40 -- common/autotest_common.sh@926 -- # '[' -z 1078721 ']' 00:26:43.759 13:37:40 -- common/autotest_common.sh@930 
-- # kill -0 1078721 00:26:43.759 13:37:40 -- common/autotest_common.sh@931 -- # uname 00:26:43.760 13:37:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:43.760 13:37:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1078721 00:26:43.760 13:37:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:43.760 13:37:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:43.760 13:37:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1078721' 00:26:43.760 killing process with pid 1078721 00:26:43.760 13:37:40 -- common/autotest_common.sh@945 -- # kill 1078721 00:26:43.760 13:37:40 -- common/autotest_common.sh@950 -- # wait 1078721 00:26:43.760 13:37:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:43.760 13:37:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:43.760 13:37:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:43.760 13:37:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.760 13:37:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:43.760 13:37:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.760 13:37:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.760 13:37:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.674 13:37:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:45.674 13:37:42 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:45.674 13:37:42 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:47.059 13:37:44 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:49.605 13:37:46 -- target/perf_adq.sh@54 -- # sleep 5 00:26:54.895 13:37:51 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:54.895 13:37:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:54.895 13:37:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.895 13:37:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:54.895 13:37:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:54.895 13:37:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:54.895 13:37:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.895 13:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.895 13:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.895 13:37:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:54.895 13:37:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:54.895 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:26:54.895 13:37:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:54.895 13:37:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:54.895 13:37:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:54.895 13:37:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:54.895 13:37:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:54.895 13:37:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:54.895 13:37:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:54.895 13:37:51 -- nvmf/common.sh@294 -- # net_devs=() 00:26:54.895 13:37:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:54.895 13:37:51 -- nvmf/common.sh@295 -- # e810=() 00:26:54.895 13:37:51 -- nvmf/common.sh@295 -- # local -ga e810 00:26:54.895 13:37:51 -- nvmf/common.sh@296 -- # x722=() 00:26:54.895 13:37:51 -- nvmf/common.sh@296 -- # local -ga x722 00:26:54.895 13:37:51 -- nvmf/common.sh@297 -- # mlx=() 00:26:54.895 
13:37:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:54.895 13:37:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.895 13:37:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:54.895 13:37:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:54.895 13:37:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:54.895 13:37:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.895 13:37:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.895 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.895 13:37:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:54.895 13:37:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.895 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.895 13:37:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:54.895 13:37:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:54.895 13:37:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.895 13:37:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.895 13:37:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.896 13:37:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.896 13:37:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.896 13:37:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.896 13:37:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:54.896 13:37:51 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.896 13:37:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:54.896 13:37:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.896 13:37:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.896 13:37:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.896 13:37:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:54.896 13:37:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:54.896 13:37:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:54.896 13:37:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:54.896 13:37:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:54.896 13:37:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.896 13:37:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.896 13:37:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.896 13:37:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:54.896 13:37:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.896 13:37:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.896 13:37:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:54.896 13:37:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.896 13:37:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.896 13:37:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:54.896 13:37:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:54.896 13:37:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.896 13:37:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.896 13:37:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.896 13:37:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.896 13:37:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:54.896 13:37:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.896 13:37:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.896 13:37:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.896 13:37:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:54.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:26:54.896 00:26:54.896 --- 10.0.0.2 ping statistics --- 00:26:54.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.896 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:26:54.896 13:37:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:26:54.896 00:26:54.896 --- 10.0.0.1 ping statistics --- 00:26:54.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.896 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:26:54.896 13:37:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.896 13:37:51 -- nvmf/common.sh@410 -- # return 0 00:26:54.896 13:37:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:54.896 13:37:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.896 13:37:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:54.896 13:37:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:54.896 13:37:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.896 13:37:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:54.896 13:37:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:54.896 13:37:51 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:54.896 13:37:51 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:54.896 13:37:51 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:54.896 13:37:51 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:54.896 net.core.busy_poll = 1 00:26:54.896 13:37:51 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:54.896 net.core.busy_read = 1 00:26:54.896 13:37:51 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:54.896 13:37:51 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:54.896 13:37:52 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:54.896 13:37:52 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:54.896 13:37:52 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:54.896 13:37:52 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:54.896 13:37:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:54.896 13:37:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:54.896 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:26:54.896 13:37:52 -- nvmf/common.sh@469 -- # nvmfpid=1083522 00:26:54.896 13:37:52 -- nvmf/common.sh@470 -- # waitforlisten 1083522 00:26:54.896 13:37:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:54.896 13:37:52 -- common/autotest_common.sh@819 -- # '[' -z 1083522 ']' 00:26:54.896 13:37:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.896 13:37:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:54.896 13:37:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
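adq_configure_driver, traced just above, is the ADQ-specific part of this second pass: hardware TC offload is enabled on the namespaced port, the driver's channel-pkt-inspect-optimize flag is turned off, busy polling is enabled globally, and an mqprio qdisc plus a flower filter pin NVMe/TCP traffic for 10.0.0.2:4420 to a dedicated hardware traffic class. Restated as standalone commands (in the trace the ethtool and tc commands run inside the cvl_0_0_ns_spdk namespace, the sysctls are global; /usr/sbin/tc is the iproute2 tc binary):

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode.
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (dst 10.0.0.2, TCP dport 4420) into hardware TC 1, hardware-only (skip_sw).
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching application-side knob is the sock_impl_set_options call a little further down, which enables placement-id 1 so the target can group connections by the NIC queue they arrive on.
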
00:26:54.896 13:37:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:54.896 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:26:54.896 [2024-07-26 13:37:52.177724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:54.896 [2024-07-26 13:37:52.177793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.896 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.896 [2024-07-26 13:37:52.251151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:54.896 [2024-07-26 13:37:52.289559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:54.896 [2024-07-26 13:37:52.289701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.896 [2024-07-26 13:37:52.289710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.896 [2024-07-26 13:37:52.289717] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.896 [2024-07-26 13:37:52.289894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.896 [2024-07-26 13:37:52.290053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.896 [2024-07-26 13:37:52.290054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.896 [2024-07-26 13:37:52.289915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.839 13:37:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:55.839 13:37:52 -- common/autotest_common.sh@852 -- # return 0 00:26:55.839 13:37:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:55.839 13:37:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:55.839 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 13:37:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.839 13:37:52 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:55.839 13:37:52 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:55.839 13:37:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 [2024-07-26 13:37:53.086122] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.839 Malloc1 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.839 13:37:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.839 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:26:55.839 [2024-07-26 13:37:53.141543] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.839 13:37:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.839 13:37:53 -- target/perf_adq.sh@94 -- # perfpid=1083765 00:26:55.839 13:37:53 -- target/perf_adq.sh@95 -- # sleep 2 00:26:55.839 13:37:53 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:55.839 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.754 13:37:55 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:57.754 13:37:55 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:57.754 13:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.754 13:37:55 -- target/perf_adq.sh@97 -- # wc -l 00:26:57.754 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:26:57.754 13:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.754 13:37:55 -- target/perf_adq.sh@97 -- # count=2 00:26:57.754 13:37:55 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:57.754 13:37:55 -- target/perf_adq.sh@103 -- # wait 1083765 00:27:05.907 Initializing NVMe Controllers 00:27:05.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:05.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:05.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:05.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:05.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:05.907 Initialization complete. Launching workers. 
00:27:05.908 ======================================================== 00:27:05.908 Latency(us) 00:27:05.908 Device Information : IOPS MiB/s Average min max 00:27:05.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13967.49 54.56 4581.80 1048.78 50413.12 00:27:05.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7484.64 29.24 8550.41 1428.52 53177.24 00:27:05.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12324.70 48.14 5191.99 1142.56 49153.90 00:27:05.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9341.22 36.49 6850.91 1531.27 50194.85 00:27:05.908 ======================================================== 00:27:05.908 Total : 43118.05 168.43 5936.69 1048.78 53177.24 00:27:05.908 00:27:05.908 13:38:03 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:05.908 13:38:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:05.908 13:38:03 -- nvmf/common.sh@116 -- # sync 00:27:05.908 13:38:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:05.908 13:38:03 -- nvmf/common.sh@119 -- # set +e 00:27:05.908 13:38:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:05.908 13:38:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:05.908 rmmod nvme_tcp 00:27:05.908 rmmod nvme_fabrics 00:27:05.908 rmmod nvme_keyring 00:27:05.908 13:38:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:05.908 13:38:03 -- nvmf/common.sh@123 -- # set -e 00:27:05.908 13:38:03 -- nvmf/common.sh@124 -- # return 0 00:27:05.908 13:38:03 -- nvmf/common.sh@477 -- # '[' -n 1083522 ']' 00:27:05.908 13:38:03 -- nvmf/common.sh@478 -- # killprocess 1083522 00:27:05.908 13:38:03 -- common/autotest_common.sh@926 -- # '[' -z 1083522 ']' 00:27:05.908 13:38:03 -- common/autotest_common.sh@930 -- # kill -0 1083522 00:27:05.908 13:38:03 -- common/autotest_common.sh@931 -- # uname 00:27:05.908 13:38:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:05.908 13:38:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1083522 00:27:06.168 13:38:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:06.168 13:38:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:06.168 13:38:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1083522' 00:27:06.168 killing process with pid 1083522 00:27:06.168 13:38:03 -- common/autotest_common.sh@945 -- # kill 1083522 00:27:06.168 13:38:03 -- common/autotest_common.sh@950 -- # wait 1083522 00:27:06.168 13:38:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:06.168 13:38:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:06.168 13:38:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:06.168 13:38:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:06.168 13:38:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:06.168 13:38:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.168 13:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.168 13:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.718 13:38:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:08.718 13:38:05 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:08.718 00:27:08.718 real 0m51.985s 00:27:08.718 user 2m43.266s 00:27:08.718 sys 0m13.067s 00:27:08.718 13:38:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.718 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:08.718 
************************************ 00:27:08.718 END TEST nvmf_perf_adq 00:27:08.718 ************************************ 00:27:08.718 13:38:05 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:08.718 13:38:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:08.718 13:38:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.718 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:08.718 ************************************ 00:27:08.718 START TEST nvmf_shutdown 00:27:08.718 ************************************ 00:27:08.718 13:38:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:08.718 * Looking for test storage... 00:27:08.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:08.718 13:38:05 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.718 13:38:05 -- nvmf/common.sh@7 -- # uname -s 00:27:08.718 13:38:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.718 13:38:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.718 13:38:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.718 13:38:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.718 13:38:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.718 13:38:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.718 13:38:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.718 13:38:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.718 13:38:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.718 13:38:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.718 13:38:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.718 13:38:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.718 13:38:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.718 13:38:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.718 13:38:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.718 13:38:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.718 13:38:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.718 13:38:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.718 13:38:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.718 13:38:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.718 13:38:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.718 13:38:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.718 13:38:05 -- paths/export.sh@5 -- # export PATH 00:27:08.718 13:38:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.718 13:38:05 -- nvmf/common.sh@46 -- # : 0 00:27:08.718 13:38:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:08.718 13:38:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:08.718 13:38:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:08.718 13:38:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.718 13:38:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.718 13:38:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:08.718 13:38:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:08.718 13:38:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:08.718 13:38:05 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:08.718 13:38:05 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:08.718 13:38:05 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:08.718 13:38:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:08.718 13:38:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.718 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:08.718 ************************************ 00:27:08.718 START TEST nvmf_shutdown_tc1 00:27:08.718 ************************************ 00:27:08.718 13:38:05 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:08.718 13:38:05 -- target/shutdown.sh@74 -- # starttarget 00:27:08.718 13:38:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:08.718 13:38:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:08.718 13:38:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.718 13:38:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:08.718 13:38:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:08.718 13:38:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:08.718 
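At this point the perf_adq suite is done and run_test has moved on to the shutdown suite: shutdown.sh re-sources test/nvmf/common.sh (ports 4420-4422, a freshly generated hostnqn, NET_TYPE=phy), sets a 64 MB malloc bdev size with 512-byte blocks, and then runs its test cases (nvmf_shutdown_tc1 is the first) through the same nvmftestinit path seen earlier. To rerun just this stage by hand, something along these lines should work; the path below is this job's workspace and would differ on another machine, and the suite assumes the same prerequisites the trace sets up (supported NICs bound to ice, hugepages configured, no stale nvmf_tgt or namespaces):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/target/shutdown.sh --transport=tcp   # runs the shutdown test cases in order
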
13:38:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.718 13:38:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.718 13:38:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.718 13:38:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:08.718 13:38:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:08.718 13:38:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:08.718 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:15.311 13:38:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:15.311 13:38:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:15.311 13:38:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:15.311 13:38:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:15.311 13:38:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:15.311 13:38:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:15.311 13:38:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:15.311 13:38:12 -- nvmf/common.sh@294 -- # net_devs=() 00:27:15.311 13:38:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:15.311 13:38:12 -- nvmf/common.sh@295 -- # e810=() 00:27:15.311 13:38:12 -- nvmf/common.sh@295 -- # local -ga e810 00:27:15.311 13:38:12 -- nvmf/common.sh@296 -- # x722=() 00:27:15.311 13:38:12 -- nvmf/common.sh@296 -- # local -ga x722 00:27:15.311 13:38:12 -- nvmf/common.sh@297 -- # mlx=() 00:27:15.311 13:38:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:15.311 13:38:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.311 13:38:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:15.311 13:38:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:15.311 13:38:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:15.311 13:38:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:15.311 13:38:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:15.311 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:15.311 13:38:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.311 13:38:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:27:15.312 13:38:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:15.312 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:15.312 13:38:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:15.312 13:38:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.312 13:38:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.312 13:38:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.312 13:38:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.312 13:38:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:15.312 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:15.312 13:38:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.312 13:38:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.312 13:38:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.312 13:38:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.312 13:38:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.312 13:38:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:15.312 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:15.312 13:38:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.312 13:38:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:15.312 13:38:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:15.312 13:38:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:15.312 13:38:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:15.312 13:38:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.312 13:38:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.312 13:38:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.312 13:38:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:15.312 13:38:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.312 13:38:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.312 13:38:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:15.312 13:38:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.312 13:38:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.312 13:38:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:15.312 13:38:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:15.312 13:38:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.312 13:38:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.312 13:38:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.312 13:38:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.312 13:38:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:15.312 13:38:12 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.573 13:38:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.574 13:38:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.574 13:38:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:15.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:27:15.574 00:27:15.574 --- 10.0.0.2 ping statistics --- 00:27:15.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.574 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:27:15.574 13:38:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:27:15.574 00:27:15.574 --- 10.0.0.1 ping statistics --- 00:27:15.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.574 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:27:15.574 13:38:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.574 13:38:12 -- nvmf/common.sh@410 -- # return 0 00:27:15.574 13:38:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:15.574 13:38:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.574 13:38:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:15.574 13:38:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:15.574 13:38:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.574 13:38:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:15.574 13:38:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:15.574 13:38:12 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:15.574 13:38:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:15.574 13:38:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:15.574 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:15.574 13:38:12 -- nvmf/common.sh@469 -- # nvmfpid=1089922 00:27:15.574 13:38:12 -- nvmf/common.sh@470 -- # waitforlisten 1089922 00:27:15.574 13:38:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:15.574 13:38:12 -- common/autotest_common.sh@819 -- # '[' -z 1089922 ']' 00:27:15.574 13:38:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.574 13:38:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:15.574 13:38:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.574 13:38:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:15.574 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:15.574 [2024-07-26 13:38:13.027510] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:15.574 [2024-07-26 13:38:13.027577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.835 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.835 [2024-07-26 13:38:13.115326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.835 [2024-07-26 13:38:13.162378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:15.835 [2024-07-26 13:38:13.162524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.835 [2024-07-26 13:38:13.162535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.835 [2024-07-26 13:38:13.162544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.835 [2024-07-26 13:38:13.162683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.835 [2024-07-26 13:38:13.162845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.835 [2024-07-26 13:38:13.163001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.835 [2024-07-26 13:38:13.163002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.405 13:38:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:16.405 13:38:13 -- common/autotest_common.sh@852 -- # return 0 00:27:16.405 13:38:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:16.405 13:38:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:16.405 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:16.405 13:38:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.405 13:38:13 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.405 13:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.405 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:16.405 [2024-07-26 13:38:13.846377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.405 13:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.405 13:38:13 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:16.405 13:38:13 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:16.405 13:38:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:16.405 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:16.405 13:38:13 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.405 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.405 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.405 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.405 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.405 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.405 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- 
target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.666 13:38:13 -- target/shutdown.sh@28 -- # cat 00:27:16.666 13:38:13 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:16.666 13:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.666 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:16.666 Malloc1 00:27:16.666 [2024-07-26 13:38:13.949826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.666 Malloc2 00:27:16.666 Malloc3 00:27:16.666 Malloc4 00:27:16.666 Malloc5 00:27:16.666 Malloc6 00:27:16.927 Malloc7 00:27:16.927 Malloc8 00:27:16.927 Malloc9 00:27:16.927 Malloc10 00:27:16.927 13:38:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.927 13:38:14 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:16.927 13:38:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:16.927 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:16.927 13:38:14 -- target/shutdown.sh@78 -- # perfpid=1090313 00:27:16.927 13:38:14 -- target/shutdown.sh@79 -- # waitforlisten 1090313 /var/tmp/bdevperf.sock 00:27:16.927 13:38:14 -- common/autotest_common.sh@819 -- # '[' -z 1090313 ']' 00:27:16.927 13:38:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.927 13:38:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:16.927 13:38:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:16.927 13:38:14 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:16.927 13:38:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:16.927 13:38:14 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:16.927 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:16.927 13:38:14 -- nvmf/common.sh@520 -- # config=() 00:27:16.927 13:38:14 -- nvmf/common.sh@520 -- # local subsystem config 00:27:16.927 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.927 { 00:27:16.927 "params": { 00:27:16.927 "name": "Nvme$subsystem", 00:27:16.927 "trtype": "$TEST_TRANSPORT", 00:27:16.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.927 "adrfam": "ipv4", 00:27:16.927 "trsvcid": "$NVMF_PORT", 00:27:16.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.927 "hdgst": ${hdgst:-false}, 00:27:16.927 "ddgst": ${ddgst:-false} 00:27:16.927 }, 00:27:16.927 "method": "bdev_nvme_attach_controller" 00:27:16.927 } 00:27:16.927 EOF 00:27:16.927 )") 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:16.927 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.927 { 00:27:16.927 "params": { 00:27:16.927 "name": "Nvme$subsystem", 00:27:16.927 "trtype": "$TEST_TRANSPORT", 00:27:16.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.927 "adrfam": "ipv4", 00:27:16.927 "trsvcid": "$NVMF_PORT", 00:27:16.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.927 "hdgst": ${hdgst:-false}, 00:27:16.927 "ddgst": ${ddgst:-false} 00:27:16.927 }, 00:27:16.927 "method": "bdev_nvme_attach_controller" 00:27:16.927 } 00:27:16.927 EOF 00:27:16.927 )") 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:16.927 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.927 { 00:27:16.927 "params": { 00:27:16.927 "name": "Nvme$subsystem", 00:27:16.927 "trtype": "$TEST_TRANSPORT", 00:27:16.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.927 "adrfam": "ipv4", 00:27:16.927 "trsvcid": "$NVMF_PORT", 00:27:16.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.927 "hdgst": ${hdgst:-false}, 00:27:16.927 "ddgst": ${ddgst:-false} 00:27:16.927 }, 00:27:16.927 "method": "bdev_nvme_attach_controller" 00:27:16.927 } 00:27:16.927 EOF 00:27:16.927 )") 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:16.927 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.927 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.927 { 00:27:16.927 "params": { 00:27:16.927 "name": "Nvme$subsystem", 00:27:16.928 "trtype": "$TEST_TRANSPORT", 00:27:16.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.928 "adrfam": "ipv4", 00:27:16.928 "trsvcid": "$NVMF_PORT", 00:27:16.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.928 "hdgst": ${hdgst:-false}, 00:27:16.928 "ddgst": ${ddgst:-false} 00:27:16.928 }, 00:27:16.928 "method": "bdev_nvme_attach_controller" 00:27:16.928 } 00:27:16.928 EOF 00:27:16.928 )") 00:27:16.928 13:38:14 -- 
nvmf/common.sh@542 -- # cat 00:27:16.928 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.928 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.928 { 00:27:16.928 "params": { 00:27:16.928 "name": "Nvme$subsystem", 00:27:16.928 "trtype": "$TEST_TRANSPORT", 00:27:16.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.928 "adrfam": "ipv4", 00:27:16.928 "trsvcid": "$NVMF_PORT", 00:27:16.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.928 "hdgst": ${hdgst:-false}, 00:27:16.928 "ddgst": ${ddgst:-false} 00:27:16.928 }, 00:27:16.928 "method": "bdev_nvme_attach_controller" 00:27:16.928 } 00:27:16.928 EOF 00:27:16.928 )") 00:27:16.928 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:16.928 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.928 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.928 { 00:27:16.928 "params": { 00:27:16.928 "name": "Nvme$subsystem", 00:27:16.928 "trtype": "$TEST_TRANSPORT", 00:27:16.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.928 "adrfam": "ipv4", 00:27:16.928 "trsvcid": "$NVMF_PORT", 00:27:16.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.928 "hdgst": ${hdgst:-false}, 00:27:16.928 "ddgst": ${ddgst:-false} 00:27:16.928 }, 00:27:16.928 "method": "bdev_nvme_attach_controller" 00:27:16.928 } 00:27:16.928 EOF 00:27:16.928 )") 00:27:16.928 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:16.928 [2024-07-26 13:38:14.394953] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:16.928 [2024-07-26 13:38:14.395004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:16.928 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.928 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.928 { 00:27:16.928 "params": { 00:27:16.928 "name": "Nvme$subsystem", 00:27:16.928 "trtype": "$TEST_TRANSPORT", 00:27:16.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.928 "adrfam": "ipv4", 00:27:16.928 "trsvcid": "$NVMF_PORT", 00:27:16.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.928 "hdgst": ${hdgst:-false}, 00:27:16.928 "ddgst": ${ddgst:-false} 00:27:16.928 }, 00:27:16.928 "method": "bdev_nvme_attach_controller" 00:27:16.928 } 00:27:16.928 EOF 00:27:16.928 )") 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:17.189 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:17.189 { 00:27:17.189 "params": { 00:27:17.189 "name": "Nvme$subsystem", 00:27:17.189 "trtype": "$TEST_TRANSPORT", 00:27:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.189 "adrfam": "ipv4", 00:27:17.189 "trsvcid": "$NVMF_PORT", 00:27:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.189 "hdgst": ${hdgst:-false}, 00:27:17.189 "ddgst": ${ddgst:-false} 00:27:17.189 }, 00:27:17.189 "method": "bdev_nvme_attach_controller" 00:27:17.189 } 00:27:17.189 EOF 00:27:17.189 )") 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:17.189 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:17.189 13:38:14 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:17.189 { 00:27:17.189 "params": { 00:27:17.189 "name": "Nvme$subsystem", 00:27:17.189 "trtype": "$TEST_TRANSPORT", 00:27:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.189 "adrfam": "ipv4", 00:27:17.189 "trsvcid": "$NVMF_PORT", 00:27:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.189 "hdgst": ${hdgst:-false}, 00:27:17.189 "ddgst": ${ddgst:-false} 00:27:17.189 }, 00:27:17.189 "method": "bdev_nvme_attach_controller" 00:27:17.189 } 00:27:17.189 EOF 00:27:17.189 )") 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:17.189 13:38:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:17.189 { 00:27:17.189 "params": { 00:27:17.189 "name": "Nvme$subsystem", 00:27:17.189 "trtype": "$TEST_TRANSPORT", 00:27:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.189 "adrfam": "ipv4", 00:27:17.189 "trsvcid": "$NVMF_PORT", 00:27:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.189 "hdgst": ${hdgst:-false}, 00:27:17.189 "ddgst": ${ddgst:-false} 00:27:17.189 }, 00:27:17.189 "method": "bdev_nvme_attach_controller" 00:27:17.189 } 00:27:17.189 EOF 00:27:17.189 )") 00:27:17.189 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.189 13:38:14 -- nvmf/common.sh@542 -- # cat 00:27:17.189 13:38:14 -- nvmf/common.sh@544 -- # jq . 00:27:17.189 13:38:14 -- nvmf/common.sh@545 -- # IFS=, 00:27:17.189 13:38:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:17.189 "params": { 00:27:17.189 "name": "Nvme1", 00:27:17.189 "trtype": "tcp", 00:27:17.189 "traddr": "10.0.0.2", 00:27:17.189 "adrfam": "ipv4", 00:27:17.189 "trsvcid": "4420", 00:27:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.189 "hdgst": false, 00:27:17.189 "ddgst": false 00:27:17.189 }, 00:27:17.189 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme2", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme3", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme4", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme5", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 
"trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme6", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme7", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme8", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme9", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 },{ 00:27:17.190 "params": { 00:27:17.190 "name": "Nvme10", 00:27:17.190 "trtype": "tcp", 00:27:17.190 "traddr": "10.0.0.2", 00:27:17.190 "adrfam": "ipv4", 00:27:17.190 "trsvcid": "4420", 00:27:17.190 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:17.190 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:17.190 "hdgst": false, 00:27:17.190 "ddgst": false 00:27:17.190 }, 00:27:17.190 "method": "bdev_nvme_attach_controller" 00:27:17.190 }' 00:27:17.190 [2024-07-26 13:38:14.455389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.190 [2024-07-26 13:38:14.484394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.106 13:38:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.106 13:38:16 -- common/autotest_common.sh@852 -- # return 0 00:27:19.106 13:38:16 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:19.106 13:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.106 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:27:19.106 13:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.106 13:38:16 -- target/shutdown.sh@83 -- # kill -9 1090313 00:27:19.106 13:38:16 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:19.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1090313 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:19.106 13:38:16 -- target/shutdown.sh@87 -- # sleep 1 
00:27:20.049 13:38:17 -- target/shutdown.sh@88 -- # kill -0 1089922 00:27:20.049 13:38:17 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:20.049 13:38:17 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:20.049 13:38:17 -- nvmf/common.sh@520 -- # config=() 00:27:20.049 13:38:17 -- nvmf/common.sh@520 -- # local subsystem config 00:27:20.049 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.049 { 00:27:20.049 "params": { 00:27:20.049 "name": "Nvme$subsystem", 00:27:20.049 "trtype": "$TEST_TRANSPORT", 00:27:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.049 "adrfam": "ipv4", 00:27:20.049 "trsvcid": "$NVMF_PORT", 00:27:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.049 "hdgst": ${hdgst:-false}, 00:27:20.049 "ddgst": ${ddgst:-false} 00:27:20.049 }, 00:27:20.049 "method": "bdev_nvme_attach_controller" 00:27:20.049 } 00:27:20.049 EOF 00:27:20.049 )") 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.049 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.049 { 00:27:20.049 "params": { 00:27:20.049 "name": "Nvme$subsystem", 00:27:20.049 "trtype": "$TEST_TRANSPORT", 00:27:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.049 "adrfam": "ipv4", 00:27:20.049 "trsvcid": "$NVMF_PORT", 00:27:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.049 "hdgst": ${hdgst:-false}, 00:27:20.049 "ddgst": ${ddgst:-false} 00:27:20.049 }, 00:27:20.049 "method": "bdev_nvme_attach_controller" 00:27:20.049 } 00:27:20.049 EOF 00:27:20.049 )") 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.049 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.049 { 00:27:20.049 "params": { 00:27:20.049 "name": "Nvme$subsystem", 00:27:20.049 "trtype": "$TEST_TRANSPORT", 00:27:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.049 "adrfam": "ipv4", 00:27:20.049 "trsvcid": "$NVMF_PORT", 00:27:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.049 "hdgst": ${hdgst:-false}, 00:27:20.049 "ddgst": ${ddgst:-false} 00:27:20.049 }, 00:27:20.049 "method": "bdev_nvme_attach_controller" 00:27:20.049 } 00:27:20.049 EOF 00:27:20.049 )") 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.049 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.049 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.049 { 00:27:20.049 "params": { 00:27:20.049 "name": "Nvme$subsystem", 00:27:20.049 "trtype": "$TEST_TRANSPORT", 00:27:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.049 "adrfam": "ipv4", 00:27:20.049 "trsvcid": "$NVMF_PORT", 00:27:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.050 "hdgst": ${hdgst:-false}, 00:27:20.050 "ddgst": ${ddgst:-false} 00:27:20.050 }, 00:27:20.050 "method": "bdev_nvme_attach_controller" 00:27:20.050 } 00:27:20.050 EOF 00:27:20.050 )") 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.050 13:38:17 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.050 { 00:27:20.050 "params": { 00:27:20.050 "name": "Nvme$subsystem", 00:27:20.050 "trtype": "$TEST_TRANSPORT", 00:27:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.050 "adrfam": "ipv4", 00:27:20.050 "trsvcid": "$NVMF_PORT", 00:27:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.050 "hdgst": ${hdgst:-false}, 00:27:20.050 "ddgst": ${ddgst:-false} 00:27:20.050 }, 00:27:20.050 "method": "bdev_nvme_attach_controller" 00:27:20.050 } 00:27:20.050 EOF 00:27:20.050 )") 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.050 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.050 { 00:27:20.050 "params": { 00:27:20.050 "name": "Nvme$subsystem", 00:27:20.050 "trtype": "$TEST_TRANSPORT", 00:27:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.050 "adrfam": "ipv4", 00:27:20.050 "trsvcid": "$NVMF_PORT", 00:27:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.050 "hdgst": ${hdgst:-false}, 00:27:20.050 "ddgst": ${ddgst:-false} 00:27:20.050 }, 00:27:20.050 "method": "bdev_nvme_attach_controller" 00:27:20.050 } 00:27:20.050 EOF 00:27:20.050 )") 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.050 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.050 { 00:27:20.050 "params": { 00:27:20.050 "name": "Nvme$subsystem", 00:27:20.050 "trtype": "$TEST_TRANSPORT", 00:27:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.050 "adrfam": "ipv4", 00:27:20.050 "trsvcid": "$NVMF_PORT", 00:27:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.050 "hdgst": ${hdgst:-false}, 00:27:20.050 "ddgst": ${ddgst:-false} 00:27:20.050 }, 00:27:20.050 "method": "bdev_nvme_attach_controller" 00:27:20.050 } 00:27:20.050 EOF 00:27:20.050 )") 00:27:20.050 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.311 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.311 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.311 { 00:27:20.311 "params": { 00:27:20.311 "name": "Nvme$subsystem", 00:27:20.311 "trtype": "$TEST_TRANSPORT", 00:27:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.311 "adrfam": "ipv4", 00:27:20.311 "trsvcid": "$NVMF_PORT", 00:27:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.311 "hdgst": ${hdgst:-false}, 00:27:20.311 "ddgst": ${ddgst:-false} 00:27:20.311 }, 00:27:20.311 "method": "bdev_nvme_attach_controller" 00:27:20.311 } 00:27:20.311 EOF 00:27:20.311 )") 00:27:20.311 [2024-07-26 13:38:17.526955] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:20.311 [2024-07-26 13:38:17.527017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091017 ] 00:27:20.311 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.311 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.311 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.311 { 00:27:20.311 "params": { 00:27:20.311 "name": "Nvme$subsystem", 00:27:20.311 "trtype": "$TEST_TRANSPORT", 00:27:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.311 "adrfam": "ipv4", 00:27:20.311 "trsvcid": "$NVMF_PORT", 00:27:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.311 "hdgst": ${hdgst:-false}, 00:27:20.311 "ddgst": ${ddgst:-false} 00:27:20.311 }, 00:27:20.311 "method": "bdev_nvme_attach_controller" 00:27:20.311 } 00:27:20.311 EOF 00:27:20.311 )") 00:27:20.311 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.311 13:38:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:20.311 13:38:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:20.311 { 00:27:20.311 "params": { 00:27:20.311 "name": "Nvme$subsystem", 00:27:20.311 "trtype": "$TEST_TRANSPORT", 00:27:20.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.311 "adrfam": "ipv4", 00:27:20.311 "trsvcid": "$NVMF_PORT", 00:27:20.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.311 "hdgst": ${hdgst:-false}, 00:27:20.311 "ddgst": ${ddgst:-false} 00:27:20.311 }, 00:27:20.311 "method": "bdev_nvme_attach_controller" 00:27:20.311 } 00:27:20.311 EOF 00:27:20.312 )") 00:27:20.312 13:38:17 -- nvmf/common.sh@542 -- # cat 00:27:20.312 13:38:17 -- nvmf/common.sh@544 -- # jq . 
00:27:20.312 13:38:17 -- nvmf/common.sh@545 -- # IFS=, 00:27:20.312 13:38:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme1", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme2", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme3", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme4", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme5", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme6", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme7", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme8", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": 
"bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme9", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 },{ 00:27:20.312 "params": { 00:27:20.312 "name": "Nvme10", 00:27:20.312 "trtype": "tcp", 00:27:20.312 "traddr": "10.0.0.2", 00:27:20.312 "adrfam": "ipv4", 00:27:20.312 "trsvcid": "4420", 00:27:20.312 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:20.312 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:20.312 "hdgst": false, 00:27:20.312 "ddgst": false 00:27:20.312 }, 00:27:20.312 "method": "bdev_nvme_attach_controller" 00:27:20.312 }' 00:27:20.312 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.312 [2024-07-26 13:38:17.588719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.312 [2024-07-26 13:38:17.617564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.697 Running I/O for 1 seconds... 00:27:23.077 00:27:23.077 Latency(us) 00:27:23.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.078 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme1n1 : 1.06 410.61 25.66 0.00 0.00 152058.87 34734.08 145053.01 00:27:23.078 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme2n1 : 1.12 358.74 22.42 0.00 0.00 168005.25 20425.39 163403.09 00:27:23.078 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme3n1 : 1.08 451.28 28.20 0.00 0.00 137736.12 10321.92 128450.56 00:27:23.078 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme4n1 : 1.07 414.39 25.90 0.00 0.00 147879.87 8519.68 139810.13 00:27:23.078 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme5n1 : 1.12 431.51 26.97 0.00 0.00 137167.55 14636.37 109663.57 00:27:23.078 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme6n1 : 1.10 438.79 27.42 0.00 0.00 138594.93 14417.92 124081.49 00:27:23.078 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme7n1 : 1.13 428.46 26.78 0.00 0.00 136060.87 10594.99 113595.73 00:27:23.078 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme8n1 : 1.09 446.57 27.91 0.00 0.00 134224.26 12014.93 119712.43 00:27:23.078 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 Nvme9n1 : 1.09 364.02 22.75 0.00 0.00 163528.51 14090.24 147674.45 00:27:23.078 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:23.078 Verification LBA range: start 0x0 length 0x400 00:27:23.078 
Nvme10n1 : 1.10 398.44 24.90 0.00 0.00 148684.61 7591.25 150295.89 00:27:23.078 =================================================================================================================== 00:27:23.078 Total : 4142.81 258.93 0.00 0.00 145584.09 7591.25 163403.09 00:27:23.078 13:38:20 -- target/shutdown.sh@93 -- # stoptarget 00:27:23.078 13:38:20 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:23.078 13:38:20 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:23.078 13:38:20 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:23.078 13:38:20 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:23.078 13:38:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:23.078 13:38:20 -- nvmf/common.sh@116 -- # sync 00:27:23.078 13:38:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:23.078 13:38:20 -- nvmf/common.sh@119 -- # set +e 00:27:23.078 13:38:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:23.078 13:38:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:23.078 rmmod nvme_tcp 00:27:23.078 rmmod nvme_fabrics 00:27:23.078 rmmod nvme_keyring 00:27:23.078 13:38:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:23.078 13:38:20 -- nvmf/common.sh@123 -- # set -e 00:27:23.078 13:38:20 -- nvmf/common.sh@124 -- # return 0 00:27:23.078 13:38:20 -- nvmf/common.sh@477 -- # '[' -n 1089922 ']' 00:27:23.078 13:38:20 -- nvmf/common.sh@478 -- # killprocess 1089922 00:27:23.078 13:38:20 -- common/autotest_common.sh@926 -- # '[' -z 1089922 ']' 00:27:23.078 13:38:20 -- common/autotest_common.sh@930 -- # kill -0 1089922 00:27:23.078 13:38:20 -- common/autotest_common.sh@931 -- # uname 00:27:23.078 13:38:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:23.078 13:38:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1089922 00:27:23.078 13:38:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:23.078 13:38:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:23.078 13:38:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1089922' 00:27:23.078 killing process with pid 1089922 00:27:23.078 13:38:20 -- common/autotest_common.sh@945 -- # kill 1089922 00:27:23.078 13:38:20 -- common/autotest_common.sh@950 -- # wait 1089922 00:27:23.338 13:38:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:23.338 13:38:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:23.338 13:38:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:23.339 13:38:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.339 13:38:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:23.339 13:38:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.339 13:38:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.339 13:38:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.884 13:38:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:25.884 00:27:25.884 real 0m16.953s 00:27:25.884 user 0m36.901s 00:27:25.884 sys 0m6.465s 00:27:25.884 13:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.884 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 ************************************ 00:27:25.884 END TEST nvmf_shutdown_tc1 00:27:25.884 ************************************ 00:27:25.884 13:38:22 -- target/shutdown.sh@147 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:25.884 13:38:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:25.884 13:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:25.884 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 ************************************ 00:27:25.884 START TEST nvmf_shutdown_tc2 00:27:25.884 ************************************ 00:27:25.884 13:38:22 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:25.884 13:38:22 -- target/shutdown.sh@98 -- # starttarget 00:27:25.884 13:38:22 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:25.884 13:38:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:25.884 13:38:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.884 13:38:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:25.884 13:38:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:25.884 13:38:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:25.884 13:38:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.884 13:38:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.884 13:38:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.884 13:38:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:25.884 13:38:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:25.884 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:27:25.884 13:38:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:25.884 13:38:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:25.884 13:38:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:25.884 13:38:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:25.884 13:38:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:25.884 13:38:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:25.884 13:38:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:25.884 13:38:22 -- nvmf/common.sh@294 -- # net_devs=() 00:27:25.884 13:38:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:25.884 13:38:22 -- nvmf/common.sh@295 -- # e810=() 00:27:25.884 13:38:22 -- nvmf/common.sh@295 -- # local -ga e810 00:27:25.884 13:38:22 -- nvmf/common.sh@296 -- # x722=() 00:27:25.884 13:38:22 -- nvmf/common.sh@296 -- # local -ga x722 00:27:25.884 13:38:22 -- nvmf/common.sh@297 -- # mlx=() 00:27:25.884 13:38:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:25.884 13:38:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.884 13:38:22 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:27:25.884 13:38:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:25.884 13:38:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.884 13:38:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:25.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:25.884 13:38:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.884 13:38:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:25.884 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:25.884 13:38:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.884 13:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.884 13:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.884 13:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:25.884 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:25.884 13:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.884 13:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.884 13:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.884 13:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.884 13:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:25.884 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:25.884 13:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.884 13:38:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:25.884 13:38:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:25.884 13:38:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:25.884 13:38:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.884 13:38:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.884 13:38:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.884 13:38:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:25.884 13:38:22 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.884 13:38:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.884 13:38:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:25.884 13:38:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.884 13:38:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.884 13:38:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:25.884 13:38:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:25.884 13:38:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.884 13:38:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.884 13:38:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.884 13:38:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.884 13:38:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:25.884 13:38:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.884 13:38:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.884 13:38:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.884 13:38:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:25.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:27:25.884 00:27:25.884 --- 10.0.0.2 ping statistics --- 00:27:25.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.884 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:27:25.884 13:38:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:27:25.884 00:27:25.884 --- 10.0.0.1 ping statistics --- 00:27:25.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.884 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:27:25.884 13:38:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.884 13:38:23 -- nvmf/common.sh@410 -- # return 0 00:27:25.884 13:38:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.884 13:38:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.884 13:38:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:25.884 13:38:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:25.884 13:38:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.884 13:38:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:25.884 13:38:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:25.884 13:38:23 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:25.885 13:38:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:25.885 13:38:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:25.885 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:27:25.885 13:38:23 -- nvmf/common.sh@469 -- # nvmfpid=1092142 00:27:25.885 13:38:23 -- nvmf/common.sh@470 -- # waitforlisten 1092142 00:27:25.885 13:38:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:25.885 13:38:23 -- common/autotest_common.sh@819 -- # '[' -z 1092142 ']' 00:27:25.885 13:38:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.885 13:38:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:25.885 13:38:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.885 13:38:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:25.885 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:27:25.885 [2024-07-26 13:38:23.242743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:25.885 [2024-07-26 13:38:23.242808] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.885 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.885 [2024-07-26 13:38:23.327489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.146 [2024-07-26 13:38:23.359701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:26.146 [2024-07-26 13:38:23.359817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.146 [2024-07-26 13:38:23.359824] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.146 [2024-07-26 13:38:23.359831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:26.146 [2024-07-26 13:38:23.359944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.146 [2024-07-26 13:38:23.360104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.146 [2024-07-26 13:38:23.360242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.146 [2024-07-26 13:38:23.360244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.718 13:38:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.718 13:38:24 -- common/autotest_common.sh@852 -- # return 0 00:27:26.718 13:38:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:26.718 13:38:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:26.718 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:26.718 13:38:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.718 13:38:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.718 13:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.718 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:26.718 [2024-07-26 13:38:24.050333] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.718 13:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.718 13:38:24 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:26.718 13:38:24 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:26.718 13:38:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:26.718 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:26.718 13:38:24 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.718 13:38:24 -- target/shutdown.sh@28 -- # cat 00:27:26.718 13:38:24 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:26.718 13:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.718 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:26.718 Malloc1 00:27:26.718 [2024-07-26 13:38:24.149087] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.718 Malloc2 
00:27:26.978 Malloc3 00:27:26.978 Malloc4 00:27:26.978 Malloc5 00:27:26.978 Malloc6 00:27:26.978 Malloc7 00:27:26.978 Malloc8 00:27:26.978 Malloc9 00:27:27.239 Malloc10 00:27:27.239 13:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.239 13:38:24 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:27.239 13:38:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:27.239 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:27.239 13:38:24 -- target/shutdown.sh@102 -- # perfpid=1092530 00:27:27.239 13:38:24 -- target/shutdown.sh@103 -- # waitforlisten 1092530 /var/tmp/bdevperf.sock 00:27:27.239 13:38:24 -- common/autotest_common.sh@819 -- # '[' -z 1092530 ']' 00:27:27.239 13:38:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.239 13:38:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:27.239 13:38:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.239 13:38:24 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:27.239 13:38:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:27.239 13:38:24 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:27.239 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:27:27.239 13:38:24 -- nvmf/common.sh@520 -- # config=() 00:27:27.239 13:38:24 -- nvmf/common.sh@520 -- # local subsystem config 00:27:27.239 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.239 { 00:27:27.239 "params": { 00:27:27.239 "name": "Nvme$subsystem", 00:27:27.239 "trtype": "$TEST_TRANSPORT", 00:27:27.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.239 "adrfam": "ipv4", 00:27:27.239 "trsvcid": "$NVMF_PORT", 00:27:27.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.239 "hdgst": ${hdgst:-false}, 00:27:27.239 "ddgst": ${ddgst:-false} 00:27:27.239 }, 00:27:27.239 "method": "bdev_nvme_attach_controller" 00:27:27.239 } 00:27:27.239 EOF 00:27:27.239 )") 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.239 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.239 { 00:27:27.239 "params": { 00:27:27.239 "name": "Nvme$subsystem", 00:27:27.239 "trtype": "$TEST_TRANSPORT", 00:27:27.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.239 "adrfam": "ipv4", 00:27:27.239 "trsvcid": "$NVMF_PORT", 00:27:27.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.239 "hdgst": ${hdgst:-false}, 00:27:27.239 "ddgst": ${ddgst:-false} 00:27:27.239 }, 00:27:27.239 "method": "bdev_nvme_attach_controller" 00:27:27.239 } 00:27:27.239 EOF 00:27:27.239 )") 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.239 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.239 { 00:27:27.239 "params": { 00:27:27.239 "name": "Nvme$subsystem", 00:27:27.239 "trtype": "$TEST_TRANSPORT", 00:27:27.239 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:27.239 "adrfam": "ipv4", 00:27:27.239 "trsvcid": "$NVMF_PORT", 00:27:27.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.239 "hdgst": ${hdgst:-false}, 00:27:27.239 "ddgst": ${ddgst:-false} 00:27:27.239 }, 00:27:27.239 "method": "bdev_nvme_attach_controller" 00:27:27.239 } 00:27:27.239 EOF 00:27:27.239 )") 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.239 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.239 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.239 { 00:27:27.239 "params": { 00:27:27.239 "name": "Nvme$subsystem", 00:27:27.239 "trtype": "$TEST_TRANSPORT", 00:27:27.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.239 "adrfam": "ipv4", 00:27:27.239 "trsvcid": "$NVMF_PORT", 00:27:27.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.239 "hdgst": ${hdgst:-false}, 00:27:27.239 "ddgst": ${ddgst:-false} 00:27:27.239 }, 00:27:27.239 "method": "bdev_nvme_attach_controller" 00:27:27.239 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 [2024-07-26 13:38:24.587927] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:27.240 [2024-07-26 13:38:24.587979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092530 ] 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.240 { 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme$subsystem", 00:27:27.240 "trtype": "$TEST_TRANSPORT", 00:27:27.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "$NVMF_PORT", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.240 "hdgst": ${hdgst:-false}, 00:27:27.240 "ddgst": ${ddgst:-false} 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 } 00:27:27.240 EOF 00:27:27.240 )") 00:27:27.240 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.240 13:38:24 -- nvmf/common.sh@542 -- # cat 00:27:27.240 13:38:24 -- nvmf/common.sh@544 -- # jq . 
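Note: the xtrace above shows the nvmf/common.sh helper gen_nvmf_target_json building one JSON fragment per subsystem and finishing with jq. A condensed sketch of that pattern follows; the function name below is illustrative, it is not the helper's verbatim source, and the outer wrapper that turns the joined fragments into a complete bdevperf --json config is omitted. The transport, address and port placeholders are resolved from the harness environment (tcp, 10.0.0.2 and 4420 in this run, as the printf output just below shows).
# Condensed sketch of the per-subsystem config generation traced above
# (illustrative name; the real helper is gen_nvmf_target_json in nvmf/common.sh).
gen_config() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": ${hdgst:-false},
            "ddgst": ${ddgst:-false}
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # joined with commas; the real helper embeds this list in a full JSON document and runs it through jq (the 'jq .' traced above)
}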
00:27:27.240 13:38:24 -- nvmf/common.sh@545 -- # IFS=, 00:27:27.240 13:38:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme1", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme2", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme3", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme4", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme5", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme6", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme7", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": "bdev_nvme_attach_controller" 00:27:27.240 },{ 00:27:27.240 "params": { 00:27:27.240 "name": "Nvme8", 00:27:27.240 "trtype": "tcp", 00:27:27.240 "traddr": "10.0.0.2", 00:27:27.240 "adrfam": "ipv4", 00:27:27.240 "trsvcid": "4420", 00:27:27.240 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:27.240 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:27.240 "hdgst": false, 00:27:27.240 "ddgst": false 00:27:27.240 }, 00:27:27.240 "method": 
"bdev_nvme_attach_controller" 00:27:27.241 },{ 00:27:27.241 "params": { 00:27:27.241 "name": "Nvme9", 00:27:27.241 "trtype": "tcp", 00:27:27.241 "traddr": "10.0.0.2", 00:27:27.241 "adrfam": "ipv4", 00:27:27.241 "trsvcid": "4420", 00:27:27.241 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:27.241 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:27.241 "hdgst": false, 00:27:27.241 "ddgst": false 00:27:27.241 }, 00:27:27.241 "method": "bdev_nvme_attach_controller" 00:27:27.241 },{ 00:27:27.241 "params": { 00:27:27.241 "name": "Nvme10", 00:27:27.241 "trtype": "tcp", 00:27:27.241 "traddr": "10.0.0.2", 00:27:27.241 "adrfam": "ipv4", 00:27:27.241 "trsvcid": "4420", 00:27:27.241 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:27.241 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:27.241 "hdgst": false, 00:27:27.241 "ddgst": false 00:27:27.241 }, 00:27:27.241 "method": "bdev_nvme_attach_controller" 00:27:27.241 }' 00:27:27.241 [2024-07-26 13:38:24.647610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.241 [2024-07-26 13:38:24.676820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.193 Running I/O for 10 seconds... 00:27:29.454 13:38:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:29.454 13:38:26 -- common/autotest_common.sh@852 -- # return 0 00:27:29.454 13:38:26 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:29.454 13:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.454 13:38:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.454 13:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.454 13:38:26 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:29.454 13:38:26 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:29.454 13:38:26 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:29.454 13:38:26 -- target/shutdown.sh@57 -- # local ret=1 00:27:29.454 13:38:26 -- target/shutdown.sh@58 -- # local i 00:27:29.454 13:38:26 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:29.454 13:38:26 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:29.454 13:38:26 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:29.454 13:38:26 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:29.454 13:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.454 13:38:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.454 13:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.454 13:38:26 -- target/shutdown.sh@60 -- # read_io_count=129 00:27:29.454 13:38:26 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:27:29.454 13:38:26 -- target/shutdown.sh@64 -- # ret=0 00:27:29.454 13:38:26 -- target/shutdown.sh@65 -- # break 00:27:29.454 13:38:26 -- target/shutdown.sh@69 -- # return 0 00:27:29.454 13:38:26 -- target/shutdown.sh@109 -- # killprocess 1092530 00:27:29.454 13:38:26 -- common/autotest_common.sh@926 -- # '[' -z 1092530 ']' 00:27:29.454 13:38:26 -- common/autotest_common.sh@930 -- # kill -0 1092530 00:27:29.454 13:38:26 -- common/autotest_common.sh@931 -- # uname 00:27:29.454 13:38:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.454 13:38:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1092530 00:27:29.454 13:38:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.454 13:38:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:29.454 13:38:26 -- common/autotest_common.sh@944 
-- # echo 'killing process with pid 1092530' 00:27:29.454 killing process with pid 1092530 00:27:29.454 13:38:26 -- common/autotest_common.sh@945 -- # kill 1092530 00:27:29.454 13:38:26 -- common/autotest_common.sh@950 -- # wait 1092530 00:27:29.454 Received shutdown signal, test time was about 0.556017 seconds 00:27:29.454 00:27:29.455 Latency(us) 00:27:29.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.455 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme1n1 : 0.52 434.91 27.18 0.00 0.00 143126.10 12779.52 119712.43 00:27:29.455 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme2n1 : 0.56 486.37 30.40 0.00 0.00 126390.52 9775.79 117964.80 00:27:29.455 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme3n1 : 0.55 345.46 21.59 0.00 0.00 173554.27 19114.67 169519.79 00:27:29.455 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme4n1 : 0.54 357.92 22.37 0.00 0.00 164124.44 12834.13 146800.64 00:27:29.455 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme5n1 : 0.55 347.96 21.75 0.00 0.00 167254.91 8028.16 156412.59 00:27:29.455 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme6n1 : 0.53 426.45 26.65 0.00 0.00 133656.07 21299.20 109663.57 00:27:29.455 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme7n1 : 0.55 416.57 26.04 0.00 0.00 136088.28 15400.96 123207.68 00:27:29.455 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme8n1 : 0.53 430.34 26.90 0.00 0.00 129089.39 13871.79 116217.17 00:27:29.455 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme9n1 : 0.55 344.28 21.52 0.00 0.00 159497.95 17913.17 145053.01 00:27:29.455 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:29.455 Verification LBA range: start 0x0 length 0x400 00:27:29.455 Nvme10n1 : 0.54 667.69 41.73 0.00 0.00 80785.72 8956.59 106168.32 00:27:29.455 =================================================================================================================== 00:27:29.455 Total : 4257.95 266.12 0.00 0.00 135952.72 8028.16 169519.79 00:27:29.715 13:38:26 -- target/shutdown.sh@112 -- # sleep 1 00:27:30.656 13:38:27 -- target/shutdown.sh@113 -- # kill -0 1092142 00:27:30.656 13:38:27 -- target/shutdown.sh@115 -- # stoptarget 00:27:30.656 13:38:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:30.656 13:38:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:30.656 13:38:28 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.656 13:38:28 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:30.656 13:38:28 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:27:30.656 13:38:28 -- nvmf/common.sh@116 -- # sync 00:27:30.656 13:38:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:30.656 13:38:28 -- nvmf/common.sh@119 -- # set +e 00:27:30.656 13:38:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:30.656 13:38:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:30.656 rmmod nvme_tcp 00:27:30.656 rmmod nvme_fabrics 00:27:30.656 rmmod nvme_keyring 00:27:30.656 13:38:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:30.656 13:38:28 -- nvmf/common.sh@123 -- # set -e 00:27:30.656 13:38:28 -- nvmf/common.sh@124 -- # return 0 00:27:30.656 13:38:28 -- nvmf/common.sh@477 -- # '[' -n 1092142 ']' 00:27:30.656 13:38:28 -- nvmf/common.sh@478 -- # killprocess 1092142 00:27:30.656 13:38:28 -- common/autotest_common.sh@926 -- # '[' -z 1092142 ']' 00:27:30.656 13:38:28 -- common/autotest_common.sh@930 -- # kill -0 1092142 00:27:30.656 13:38:28 -- common/autotest_common.sh@931 -- # uname 00:27:30.656 13:38:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.656 13:38:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1092142 00:27:30.656 13:38:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:30.917 13:38:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:30.917 13:38:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1092142' 00:27:30.917 killing process with pid 1092142 00:27:30.917 13:38:28 -- common/autotest_common.sh@945 -- # kill 1092142 00:27:30.917 13:38:28 -- common/autotest_common.sh@950 -- # wait 1092142 00:27:30.917 13:38:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:30.917 13:38:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:30.917 13:38:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:30.917 13:38:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.917 13:38:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:30.917 13:38:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.917 13:38:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.917 13:38:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.465 13:38:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:33.465 00:27:33.465 real 0m7.616s 00:27:33.465 user 0m22.713s 00:27:33.465 sys 0m1.180s 00:27:33.465 13:38:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.465 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:27:33.465 ************************************ 00:27:33.465 END TEST nvmf_shutdown_tc2 00:27:33.465 ************************************ 00:27:33.465 13:38:30 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:33.465 13:38:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.465 13:38:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.465 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:27:33.465 ************************************ 00:27:33.465 START TEST nvmf_shutdown_tc3 00:27:33.465 ************************************ 00:27:33.465 13:38:30 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:33.465 13:38:30 -- target/shutdown.sh@120 -- # starttarget 00:27:33.465 13:38:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:33.465 13:38:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:33.465 13:38:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.465 13:38:30 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:27:33.465 13:38:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:33.465 13:38:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:33.465 13:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.465 13:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.465 13:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.465 13:38:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:33.465 13:38:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:33.465 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:27:33.465 13:38:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:33.465 13:38:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:33.465 13:38:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:33.465 13:38:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:33.465 13:38:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:33.465 13:38:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:33.465 13:38:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:33.465 13:38:30 -- nvmf/common.sh@294 -- # net_devs=() 00:27:33.465 13:38:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:33.465 13:38:30 -- nvmf/common.sh@295 -- # e810=() 00:27:33.465 13:38:30 -- nvmf/common.sh@295 -- # local -ga e810 00:27:33.465 13:38:30 -- nvmf/common.sh@296 -- # x722=() 00:27:33.465 13:38:30 -- nvmf/common.sh@296 -- # local -ga x722 00:27:33.465 13:38:30 -- nvmf/common.sh@297 -- # mlx=() 00:27:33.465 13:38:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:33.465 13:38:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.465 13:38:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:33.465 13:38:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:33.465 13:38:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:33.465 13:38:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:33.465 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:33.465 13:38:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
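Note: the matching above keys purely off PCI vendor/device IDs; 0x8086:0x159b lands in the e810 list and resolves to the two ports found at 0000:4b:00.0 and 0000:4b:00.1. Outside the harness the same lookup can be reproduced with standard tools; the commands below are illustrative only (assuming lspci is installed), with the IDs and names taken from this trace.
lspci -nn -d 8086:159b                      # list the matching E810 functions, here 4b:00.0 and 4b:00.1
ls /sys/bus/pci/devices/0000:4b:00.0/net/   # print the kernel netdev behind the port, here cvl_0_0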
00:27:33.465 13:38:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:33.465 13:38:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:33.465 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:33.465 13:38:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:33.465 13:38:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.465 13:38:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.465 13:38:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:33.465 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:33.465 13:38:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.465 13:38:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:33.465 13:38:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.465 13:38:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.465 13:38:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:33.465 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:33.465 13:38:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.465 13:38:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:33.465 13:38:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:33.465 13:38:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:33.465 13:38:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.465 13:38:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.465 13:38:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.465 13:38:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:33.465 13:38:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.465 13:38:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.465 13:38:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:33.465 13:38:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.465 13:38:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.466 13:38:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:33.466 13:38:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:33.466 13:38:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.466 13:38:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.466 13:38:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.466 13:38:30 -- 
nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.466 13:38:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:33.466 13:38:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.466 13:38:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.466 13:38:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.466 13:38:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:33.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:27:33.466 00:27:33.466 --- 10.0.0.2 ping statistics --- 00:27:33.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.466 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:27:33.466 13:38:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:27:33.466 00:27:33.466 --- 10.0.0.1 ping statistics --- 00:27:33.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.466 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:33.466 13:38:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.466 13:38:30 -- nvmf/common.sh@410 -- # return 0 00:27:33.466 13:38:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:33.466 13:38:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.466 13:38:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:33.466 13:38:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:33.466 13:38:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.466 13:38:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:33.466 13:38:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:33.466 13:38:30 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:33.466 13:38:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:33.466 13:38:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:33.466 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:27:33.466 13:38:30 -- nvmf/common.sh@469 -- # nvmfpid=1093771 00:27:33.466 13:38:30 -- nvmf/common.sh@470 -- # waitforlisten 1093771 00:27:33.466 13:38:30 -- common/autotest_common.sh@819 -- # '[' -z 1093771 ']' 00:27:33.466 13:38:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:33.466 13:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.466 13:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:33.466 13:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.466 13:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:33.466 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:27:33.466 [2024-07-26 13:38:30.922766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:33.466 [2024-07-26 13:38:30.922833] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.727 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.727 [2024-07-26 13:38:31.009978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.727 [2024-07-26 13:38:31.051096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:33.727 [2024-07-26 13:38:31.051236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.727 [2024-07-26 13:38:31.051245] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.727 [2024-07-26 13:38:31.051251] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.727 [2024-07-26 13:38:31.051435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.727 [2024-07-26 13:38:31.051603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.727 [2024-07-26 13:38:31.051739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.727 [2024-07-26 13:38:31.051741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:34.299 13:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:34.299 13:38:31 -- common/autotest_common.sh@852 -- # return 0 00:27:34.299 13:38:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:34.299 13:38:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:34.299 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:27:34.299 13:38:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.299 13:38:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.299 13:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.299 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:27:34.299 [2024-07-26 13:38:31.737273] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.299 13:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.299 13:38:31 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:34.299 13:38:31 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:34.299 13:38:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:34.299 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:27:34.299 13:38:31 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.299 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.299 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.299 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.299 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.299 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.299 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.300 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.300 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.300 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.300 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.560 13:38:31 -- 
target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.560 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.560 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.560 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.560 13:38:31 -- target/shutdown.sh@28 -- # cat 00:27:34.560 13:38:31 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:34.560 13:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.560 13:38:31 -- common/autotest_common.sh@10 -- # set +x 00:27:34.560 Malloc1 00:27:34.560 [2024-07-26 13:38:31.836090] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.560 Malloc2 00:27:34.560 Malloc3 00:27:34.560 Malloc4 00:27:34.560 Malloc5 00:27:34.560 Malloc6 00:27:34.820 Malloc7 00:27:34.820 Malloc8 00:27:34.820 Malloc9 00:27:34.820 Malloc10 00:27:34.820 13:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.820 13:38:32 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:34.820 13:38:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:34.820 13:38:32 -- common/autotest_common.sh@10 -- # set +x 00:27:34.820 13:38:32 -- target/shutdown.sh@124 -- # perfpid=1094068 00:27:34.820 13:38:32 -- target/shutdown.sh@125 -- # waitforlisten 1094068 /var/tmp/bdevperf.sock 00:27:34.820 13:38:32 -- common/autotest_common.sh@819 -- # '[' -z 1094068 ']' 00:27:34.820 13:38:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.820 13:38:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:34.820 13:38:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
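Note: the Malloc1 through Malloc10 bdevs and the cnode1 through cnode10 subsystems listed above are created from RPCs accumulated in rpcs.txt by the create_subsystems step; the file's exact contents are not reproduced in this trace. A typical per-subsystem sequence with SPDK's rpc.py would look roughly as follows; the bdev size, block size and serial number are placeholders rather than values taken from this log, while the NQN, bdev name and listener address/port do come from the trace.
rpc.py bdev_malloc_create 64 512 -b Malloc1                 # size_mb block_size (placeholder values)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420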
00:27:34.820 13:38:32 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:34.820 13:38:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:34.820 13:38:32 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:34.820 13:38:32 -- common/autotest_common.sh@10 -- # set +x 00:27:34.820 13:38:32 -- nvmf/common.sh@520 -- # config=() 00:27:34.820 13:38:32 -- nvmf/common.sh@520 -- # local subsystem config 00:27:34.820 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.820 { 00:27:34.820 "params": { 00:27:34.820 "name": "Nvme$subsystem", 00:27:34.820 "trtype": "$TEST_TRANSPORT", 00:27:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.820 "adrfam": "ipv4", 00:27:34.820 "trsvcid": "$NVMF_PORT", 00:27:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.820 "hdgst": ${hdgst:-false}, 00:27:34.820 "ddgst": ${ddgst:-false} 00:27:34.820 }, 00:27:34.820 "method": "bdev_nvme_attach_controller" 00:27:34.820 } 00:27:34.820 EOF 00:27:34.820 )") 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.820 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.820 { 00:27:34.820 "params": { 00:27:34.820 "name": "Nvme$subsystem", 00:27:34.820 "trtype": "$TEST_TRANSPORT", 00:27:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.820 "adrfam": "ipv4", 00:27:34.820 "trsvcid": "$NVMF_PORT", 00:27:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.820 "hdgst": ${hdgst:-false}, 00:27:34.820 "ddgst": ${ddgst:-false} 00:27:34.820 }, 00:27:34.820 "method": "bdev_nvme_attach_controller" 00:27:34.820 } 00:27:34.820 EOF 00:27:34.820 )") 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.820 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.820 { 00:27:34.820 "params": { 00:27:34.820 "name": "Nvme$subsystem", 00:27:34.820 "trtype": "$TEST_TRANSPORT", 00:27:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.820 "adrfam": "ipv4", 00:27:34.820 "trsvcid": "$NVMF_PORT", 00:27:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.820 "hdgst": ${hdgst:-false}, 00:27:34.820 "ddgst": ${ddgst:-false} 00:27:34.820 }, 00:27:34.820 "method": "bdev_nvme_attach_controller" 00:27:34.820 } 00:27:34.820 EOF 00:27:34.820 )") 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.820 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.820 { 00:27:34.820 "params": { 00:27:34.820 "name": "Nvme$subsystem", 00:27:34.820 "trtype": "$TEST_TRANSPORT", 00:27:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.820 "adrfam": "ipv4", 00:27:34.820 "trsvcid": "$NVMF_PORT", 00:27:34.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.820 "hdgst": ${hdgst:-false}, 00:27:34.820 "ddgst": ${ddgst:-false} 00:27:34.820 }, 00:27:34.820 "method": "bdev_nvme_attach_controller" 00:27:34.820 } 00:27:34.820 EOF 00:27:34.820 )") 
00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.820 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.820 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.820 { 00:27:34.820 "params": { 00:27:34.820 "name": "Nvme$subsystem", 00:27:34.820 "trtype": "$TEST_TRANSPORT", 00:27:34.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.820 "adrfam": "ipv4", 00:27:34.821 "trsvcid": "$NVMF_PORT", 00:27:34.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.821 "hdgst": ${hdgst:-false}, 00:27:34.821 "ddgst": ${ddgst:-false} 00:27:34.821 }, 00:27:34.821 "method": "bdev_nvme_attach_controller" 00:27:34.821 } 00:27:34.821 EOF 00:27:34.821 )") 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.821 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.821 { 00:27:34.821 "params": { 00:27:34.821 "name": "Nvme$subsystem", 00:27:34.821 "trtype": "$TEST_TRANSPORT", 00:27:34.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.821 "adrfam": "ipv4", 00:27:34.821 "trsvcid": "$NVMF_PORT", 00:27:34.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.821 "hdgst": ${hdgst:-false}, 00:27:34.821 "ddgst": ${ddgst:-false} 00:27:34.821 }, 00:27:34.821 "method": "bdev_nvme_attach_controller" 00:27:34.821 } 00:27:34.821 EOF 00:27:34.821 )") 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.821 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.821 { 00:27:34.821 "params": { 00:27:34.821 "name": "Nvme$subsystem", 00:27:34.821 "trtype": "$TEST_TRANSPORT", 00:27:34.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.821 "adrfam": "ipv4", 00:27:34.821 "trsvcid": "$NVMF_PORT", 00:27:34.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.821 "hdgst": ${hdgst:-false}, 00:27:34.821 "ddgst": ${ddgst:-false} 00:27:34.821 }, 00:27:34.821 "method": "bdev_nvme_attach_controller" 00:27:34.821 } 00:27:34.821 EOF 00:27:34.821 )") 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.821 [2024-07-26 13:38:32.281579] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:34.821 [2024-07-26 13:38:32.281641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094068 ] 00:27:34.821 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.821 { 00:27:34.821 "params": { 00:27:34.821 "name": "Nvme$subsystem", 00:27:34.821 "trtype": "$TEST_TRANSPORT", 00:27:34.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.821 "adrfam": "ipv4", 00:27:34.821 "trsvcid": "$NVMF_PORT", 00:27:34.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.821 "hdgst": ${hdgst:-false}, 00:27:34.821 "ddgst": ${ddgst:-false} 00:27:34.821 }, 00:27:34.821 "method": "bdev_nvme_attach_controller" 00:27:34.821 } 00:27:34.821 EOF 00:27:34.821 )") 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:34.821 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:34.821 { 00:27:34.821 "params": { 00:27:34.821 "name": "Nvme$subsystem", 00:27:34.821 "trtype": "$TEST_TRANSPORT", 00:27:34.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.821 "adrfam": "ipv4", 00:27:34.821 "trsvcid": "$NVMF_PORT", 00:27:34.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.821 "hdgst": ${hdgst:-false}, 00:27:34.821 "ddgst": ${ddgst:-false} 00:27:34.821 }, 00:27:34.821 "method": "bdev_nvme_attach_controller" 00:27:34.821 } 00:27:34.821 EOF 00:27:34.821 )") 00:27:34.821 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:35.082 13:38:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.082 13:38:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.082 { 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme$subsystem", 00:27:35.082 "trtype": "$TEST_TRANSPORT", 00:27:35.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "$NVMF_PORT", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.082 "hdgst": ${hdgst:-false}, 00:27:35.082 "ddgst": ${ddgst:-false} 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 } 00:27:35.082 EOF 00:27:35.082 )") 00:27:35.082 13:38:32 -- nvmf/common.sh@542 -- # cat 00:27:35.082 13:38:32 -- nvmf/common.sh@544 -- # jq . 
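Note: a condensed sketch of how the harness drives this step (the real logic lives in target/shutdown.sh, and rpc_cmd is the wrapper seen throughout this trace). The generated JSON reaches bdevperf through process substitution of the generator's output, which is why the recorded command line shows --json /dev/fd/63; read completions are then polled over the bdevperf RPC socket before the target is killed. The poll is the same check traced further below, where read_io_count=129 satisfies the 100-read threshold and the loop breaks.
# Condensed sketch; the harness also waits for the RPC socket (waitforlisten) before polling.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &

for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
        jq -r '.bdevs[0].num_read_ops')
    [[ $read_io_count -ge 100 ]] && break
    sleep 1
done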
00:27:35.082 13:38:32 -- nvmf/common.sh@545 -- # IFS=, 00:27:35.082 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.082 13:38:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme1", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme2", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme3", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme4", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme5", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme6", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme7", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme8", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 
00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme9", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 },{ 00:27:35.082 "params": { 00:27:35.082 "name": "Nvme10", 00:27:35.082 "trtype": "tcp", 00:27:35.082 "traddr": "10.0.0.2", 00:27:35.082 "adrfam": "ipv4", 00:27:35.082 "trsvcid": "4420", 00:27:35.082 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.082 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.082 "hdgst": false, 00:27:35.082 "ddgst": false 00:27:35.082 }, 00:27:35.082 "method": "bdev_nvme_attach_controller" 00:27:35.082 }' 00:27:35.082 [2024-07-26 13:38:32.342403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.082 [2024-07-26 13:38:32.371907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.467 Running I/O for 10 seconds... 00:27:37.050 13:38:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:37.050 13:38:34 -- common/autotest_common.sh@852 -- # return 0 00:27:37.050 13:38:34 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:37.050 13:38:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.050 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:27:37.050 13:38:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.050 13:38:34 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.050 13:38:34 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:37.050 13:38:34 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:37.050 13:38:34 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:37.050 13:38:34 -- target/shutdown.sh@57 -- # local ret=1 00:27:37.050 13:38:34 -- target/shutdown.sh@58 -- # local i 00:27:37.050 13:38:34 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:37.050 13:38:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:37.050 13:38:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:37.050 13:38:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:37.050 13:38:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.050 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:27:37.050 13:38:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.050 13:38:34 -- target/shutdown.sh@60 -- # read_io_count=129 00:27:37.050 13:38:34 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:27:37.050 13:38:34 -- target/shutdown.sh@64 -- # ret=0 00:27:37.050 13:38:34 -- target/shutdown.sh@65 -- # break 00:27:37.050 13:38:34 -- target/shutdown.sh@69 -- # return 0 00:27:37.050 13:38:34 -- target/shutdown.sh@134 -- # killprocess 1093771 00:27:37.050 13:38:34 -- common/autotest_common.sh@926 -- # '[' -z 1093771 ']' 00:27:37.050 13:38:34 -- common/autotest_common.sh@930 -- # kill -0 1093771 00:27:37.050 13:38:34 -- common/autotest_common.sh@931 -- # uname 00:27:37.050 13:38:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:37.050 13:38:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1093771 00:27:37.050 13:38:34 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:37.050 13:38:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:37.050 13:38:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1093771' 00:27:37.050 killing process with pid 1093771 00:27:37.050 13:38:34 -- common/autotest_common.sh@945 -- # kill 1093771 00:27:37.051 13:38:34 -- common/autotest_common.sh@950 -- # wait 1093771 00:27:37.051 [2024-07-26 13:38:34.462240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b150 is same with the state(5) to be set
[... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats, with timestamps running from 13:38:34.462308 through 13:38:34.466554, first for tqpair=0x157b150, then 0x157db00, then 0x157b600 ...]
00:27:37.052 [2024-07-26 13:38:34.466559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.052 [2024-07-26 13:38:34.466652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 
13:38:34.466662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.466700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157b600 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 
[2024-07-26 13:38:34.467739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.467875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 
nsid:1 lba:22272 len:12the state(5) to be set 00:27:37.053 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.053 [2024-07-26 13:38:34.467900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.053 [2024-07-26 13:38:34.467903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.053 [2024-07-26 13:38:34.467907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.467918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.467923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.467933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.467938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:37.054 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.467948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.467953] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.467964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.467975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.467980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.467990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.467996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:37.054 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23424 len:12[2024-07-26 13:38:34.468008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.468031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-26 13:38:34.468036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.468052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.468074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.468090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 [2024-07-26 13:38:34.468106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.468111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:37.054 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.054 [2024-07-26 13:38:34.468119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26240 len:12[2024-07-26 13:38:34.468124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.054 the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.054 [2024-07-26 13:38:34.468132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.468151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:27:37.055 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.468162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26496 len:1the state(5) to be set 00:27:37.055 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468180] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with [2024-07-26 13:38:34.468199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26752 len:12the state(5) to be set 00:27:37.055 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bab0 is same with the state(5) to be set 00:27:37.055 [2024-07-26 13:38:34.468219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.055 [2024-07-26 13:38:34.468466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:37.055 [2024-07-26 13:38:34.468475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.056 [2024-07-26 13:38:34.468578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.056 [2024-07-26 13:38:34.468637] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c7a1d0 was disconnected and freed. reset controller. 
00:27:37.056 [2024-07-26 13:38:34.469031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.056 [2024-07-26 13:38:34.469163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set 00:27:37.057 [2024-07-26 13:38:34.469263] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157bf40 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error repeats back-to-back and is elided here: tqpair=0x157bf40 from 13:38:34.469268 to 13:38:34.469351, tqpair=0x157c3f0 from 13:38:34.469781 to 13:38:34.470088, tqpair=0x157c8a0 from 13:38:34.470667 to 13:38:34.470867 ...]
00:27:37.059 [2024-07-26 13:38:34.470956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.470977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.470987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.470998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f800 is same with the state(5) to be set 00:27:37.059 [2024-07-26 13:38:34.471065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1affb80 is same with the state(5) to be set 00:27:37.059 [2024-07-26 13:38:34.471149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.059 [2024-07-26 13:38:34.471173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.059 [2024-07-26 13:38:34.471181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 
13:38:34.471211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8fe0 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af5230 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af24d0 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.060 [2024-07-26 13:38:34.471473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.060 [2024-07-26 13:38:34.471480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a517d0 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the state(5) to be set 00:27:37.060 [2024-07-26 13:38:34.471624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157cd50 is same with the 
state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state error repeats back-to-back and is elided here: tqpair=0x157cd50 from 13:38:34.471629 to 13:38:34.471780, tqpair=0x157d670 at 13:38:34.472397 and 13:38:34.472412 ...]
00:27:37.061 [2024-07-26 13:38:34.472867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.472988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.472998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.061 [2024-07-26 13:38:34.473170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.061 [2024-07-26 13:38:34.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.062 [2024-07-26 13:38:34.473617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.062 [2024-07-26 13:38:34.473624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
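The READ/WRITE NOTICE pairs above and below are SPDK printing each outstanding I/O command together with its completion status while the queue pairs are disconnected during the controller reset exercised by this test (see the bdev_nvme "reset controller" notice further down); the interleaved tcp.c/nvme_tcp.c errors report that the receive state being set on a qpair matches the state it is already in. In "ABORTED - SQ DELETION (00/08)", the "(00/08)" pair reads as the NVMe status code type and status code: type 0x0 is the generic command status set, and code 0x08 is "Command Aborted due to SQ Deletion". As a rough illustration only (this is not SPDK source; the helper and its names are made up for this sketch), a decoder for that pair could look like:

/* Illustrative sketch, not SPDK code: decode the "(sct/sc)" pair printed
 * after each completion above, e.g. "(00/08)" -> command aborted because
 * its submission queue was deleted during the controller reset. */
#include <stdio.h>

static const char *nvme_status_text(unsigned int sct, unsigned int sc)
{
    if (sct != 0x0) {                    /* 0x0 = generic command status */
        return "non-generic status code type";
    }
    switch (sc) {
    case 0x00: return "SUCCESSFUL COMPLETION";
    case 0x08: return "ABORTED - SQ DELETION";   /* the case in this log */
    default:   return "other generic command status";
    }
}

int main(void)
{
    printf("(00/08) -> %s\n", nvme_status_text(0x00, 0x08));
    return 0;
}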
00:27:37.062 [2024-07-26 13:38:34.473633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.063 [2024-07-26 13:38:34.473721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.063 [2024-07-26 13:38:34.473730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 
[2024-07-26 13:38:34.473796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.473923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.473971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.474021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.474068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.474119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.474170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.474233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.474279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 
13:38:34.474373] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bb1a60 was disconnected and freed. reset controller. 00:27:37.064 [2024-07-26 13:38:34.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.064 [2024-07-26 13:38:34.475326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.064 [2024-07-26 13:38:34.475334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.475903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.475956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.476001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.476052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.476156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.476208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.065 [2024-07-26 13:38:34.476261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.065 [2024-07-26 13:38:34.484550] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157d670 is same with the state(5) to be set 00:27:37.065 
[2024-07-26 13:38:34.484570 to 13:38:34.484869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: last message repeated for tqpair=0x157d670 (state(5)) 
00:27:37.066 [2024-07-26 13:38:34.490956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.490997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.066 [2024-07-26 13:38:34.491228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 
[2024-07-26 13:38:34.491404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 
13:38:34.491579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.066 [2024-07-26 13:38:34.491623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.066 [2024-07-26 13:38:34.491630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.491776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.491785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492128] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d255b0 was disconnected and freed. reset controller. 00:27:37.067 [2024-07-26 13:38:34.492161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.067 [2024-07-26 13:38:34.492195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af24d0 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.492251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d3a0 is same with the state(5) to be set 00:27:37.067 [2024-07-26 13:38:34.492348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:37.067 [2024-07-26 13:38:34.492382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c352d0 is same with the state(5) to be set 00:27:37.067 [2024-07-26 13:38:34.492435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f800 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.492448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1affb80 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.492462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8fe0 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.492478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af5230 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.492502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eee0 is same with the state(5) to be set 00:27:37.067 [2024-07-26 13:38:34.492586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.067 [2024-07-26 13:38:34.492642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.492649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eab0 is same with the state(5) to be set 00:27:37.067 [2024-07-26 13:38:34.492667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a517d0 (9): Bad file descriptor 00:27:37.067 [2024-07-26 13:38:34.494157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 [2024-07-26 13:38:34.494281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.067 [2024-07-26 13:38:34.494291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.067 
[2024-07-26 13:38:34.494298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 
13:38:34.494474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.494631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.494680] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bb4620 was disconnected and freed. reset controller. 
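For reference, the "(00/08)" pair in the ABORTED - SQ DELETION completions above is the NVMe status code type / status code: SCT 0h (Generic Command Status) with SC 08h (Command Aborted due to SQ Deletion), the expected status for I/O that was in flight while a qpair is deleted for a controller reset. A minimal decode sketch in C (not SPDK's own printer; it only spells out the value the log itself names):

```c
/* Minimal sketch: map an (SCT/SC) pair such as "(00/08)" to its NVMe meaning.
 * Only the generic-status case seen in this log is spelled out. */
#include <stdio.h>

static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";   /* generic status 08h: command aborted due to SQ deletion */
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    return "OTHER (see the NVMe base spec status code tables)";
}

int main(void)
{
    /* (00/08) as printed by the completion lines above */
    printf("(%02x/%02x) -> %s\n", 0x0u, 0x08u, nvme_status_str(0x0, 0x08));
    return 0;
}
```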
00:27:37.068 [2024-07-26 13:38:34.495907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:37.068 [2024-07-26 13:38:34.495927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eee0 (9): Bad file descriptor 00:27:37.068 [2024-07-26 13:38:34.497969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:37.068 [2024-07-26 13:38:34.498577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.068 [2024-07-26 13:38:34.499102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.068 [2024-07-26 13:38:34.499116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af24d0 with addr=10.0.0.2, port=4420 00:27:37.068 [2024-07-26 13:38:34.499126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af24d0 is same with the state(5) to be set 00:27:37.068 [2024-07-26 13:38:34.499225] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.068 [2024-07-26 13:38:34.499271] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.068 [2024-07-26 13:38:34.499650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.068 [2024-07-26 13:38:34.499778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.068 [2024-07-26 13:38:34.499899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.068 [2024-07-26 13:38:34.499907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.499917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.499925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.499935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.499943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.499953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 
13:38:34.499961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.499971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.499979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.499989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.499998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500144] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.069 [2024-07-26 13:38:34.500467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.069 [2024-07-26 13:38:34.500477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.500486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.500496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.500504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.070 [2024-07-26 13:38:34.500514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.070 [2024-07-26 13:38:34.500522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.070 [2024-07-26 13:38:34.500531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.070 [2024-07-26 13:38:34.500539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.070 [2024-07-26 13:38:34.500549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.070 [2024-07-26 13:38:34.500557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.070 [2024-07-26 13:38:34.500567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.070 [2024-07-26 13:38:34.500575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.070 [2024-07-26 13:38:34.500635] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bb3040 was disconnected and freed. reset controller.
00:27:37.070 [2024-07-26 13:38:34.500970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:37.070 [2024-07-26 13:38:34.500987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2d3a0 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.501473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.502036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.502052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2eee0 with addr=10.0.0.2, port=4420
00:27:37.070 [2024-07-26 13:38:34.502062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eee0 is same with the state(5) to be set
00:27:37.070 [2024-07-26 13:38:34.502413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.502819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.502833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9f800 with addr=10.0.0.2, port=4420
00:27:37.070 [2024-07-26 13:38:34.502842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f800 is same with the state(5) to be set
00:27:37.070 [2024-07-26 13:38:34.502856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af24d0 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.502939] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:37.070 [2024-07-26 13:38:34.502983] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:37.070 [2024-07-26 13:38:34.504464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:37.070 [2024-07-26 13:38:34.504489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eab0 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.504508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eee0 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.504518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f800 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.504532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.070 [2024-07-26 13:38:34.504539] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.070 [2024-07-26 13:38:34.504547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.070 [2024-07-26 13:38:34.504579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c352d0 (9): Bad file descriptor
00:27:37.070 [2024-07-26 13:38:34.504706] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:37.070 [2024-07-26 13:38:34.504729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.070 [2024-07-26 13:38:34.505444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.505987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.070 [2024-07-26 13:38:34.506001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2d3a0 with addr=10.0.0.2, port=4420
00:27:37.070 [2024-07-26 13:38:34.506010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d3a0 is same with the state(5) to be set
00:27:37.070 [2024-07-26 13:38:34.506032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:37.070 [2024-07-26 13:38:34.506039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:37.070 [2024-07-26 13:38:34.506048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:37.070 [2024-07-26 13:38:34.506062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:37.070 [2024-07-26 13:38:34.506069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:37.070 [2024-07-26 13:38:34.506076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:37.070 [2024-07-26 13:38:34.506119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 13:38:34.506302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.070 [2024-07-26 
13:38:34.506319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.070 [2024-07-26 13:38:34.506327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506499] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.071 [2024-07-26 13:38:34.506961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.071 [2024-07-26 13:38:34.506970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.506979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.506988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.507274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.507283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae1f80 is same with the state(5) to be set 00:27:37.072 [2024-07-26 13:38:34.508537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.072 [2024-07-26 13:38:34.508892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.072 [2024-07-26 13:38:34.508902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.508920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.508938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.508955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.508972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.508990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.508998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.073 [2024-07-26 13:38:34.509543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.073 [2024-07-26 13:38:34.509550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.509690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.509699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c19290 is same with the state(5) to be set 00:27:37.074 [2024-07-26 13:38:34.510951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.510965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.510976] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.510984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.510996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.074 [2024-07-26 13:38:34.511328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.074 [2024-07-26 13:38:34.511336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30592 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:37.075 [2024-07-26 13:38:34.511718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 
[2024-07-26 13:38:34.511890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.075 [2024-07-26 13:38:34.511944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.075 [2024-07-26 13:38:34.511954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.511963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.511972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.511980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.511990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.511999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.512016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.512034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.512052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 
13:38:34.512069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.512086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.076 [2024-07-26 13:38:34.512103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.076 [2024-07-26 13:38:34.512112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a830 is same with the state(5) to be set 00:27:37.340 [2024-07-26 13:38:34.513361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.340 [2024-07-26 13:38:34.513892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.340 [2024-07-26 13:38:34.513902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.513919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.513937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.513954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.513971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.513989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.513997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.341 [2024-07-26 13:38:34.514525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.341 [2024-07-26 13:38:34.514534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0480 is same with the state(5) to be set 00:27:37.341 [2024-07-26 13:38:34.516046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.341 [2024-07-26 13:38:34.516062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.341 [2024-07-26 13:38:34.516072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:37.341 [2024-07-26 13:38:34.516083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:37.341 [2024-07-26 13:38:34.516093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:37.341 [2024-07-26 13:38:34.516261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.341 [2024-07-26 13:38:34.516797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.516809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2eab0 with addr=10.0.0.2, port=4420 00:27:37.342 [2024-07-26 13:38:34.516817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eab0 is same with the state(5) to be set 00:27:37.342 [2024-07-26 13:38:34.516829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2d3a0 (9): Bad file descriptor 00:27:37.342 [2024-07-26 13:38:34.516885] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.342 [2024-07-26 13:38:34.516901] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.342 [2024-07-26 13:38:34.516970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:37.342 [2024-07-26 13:38:34.517476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.517995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.518008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1affb80 with addr=10.0.0.2, port=4420 00:27:37.342 [2024-07-26 13:38:34.518015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1affb80 is same with the state(5) to be set 00:27:37.342 [2024-07-26 13:38:34.518607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.519174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.519189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb8fe0 with addr=10.0.0.2, port=4420 00:27:37.342 [2024-07-26 13:38:34.519199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb8fe0 is same with the state(5) to be set 00:27:37.342 [2024-07-26 13:38:34.519707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.520398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.342 [2024-07-26 13:38:34.520437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a517d0 with addr=10.0.0.2, port=4420 00:27:37.342 [2024-07-26 13:38:34.520448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a517d0 is same with the state(5) to be set 00:27:37.342 [2024-07-26 13:38:34.520463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eab0 (9): Bad file descriptor 00:27:37.342 [2024-07-26 13:38:34.520473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:37.342 [2024-07-26 13:38:34.520486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:37.342 [2024-07-26 13:38:34.520495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:37.342 [2024-07-26 13:38:34.520517] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.342 [2024-07-26 13:38:34.521599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.521985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.521992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.342 [2024-07-26 13:38:34.522091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.342 [2024-07-26 13:38:34.522100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.343 [2024-07-26 13:38:34.522750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.343 [2024-07-26 13:38:34.522759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb5bc0 is same with the state(5) to be set 00:27:37.343 [2024-07-26 13:38:34.524226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.343 [2024-07-26 13:38:34.524251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:37.343 [2024-07-26 13:38:34.524261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:37.344 [2024-07-26 13:38:34.524270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
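The "(00/08)" pair that spdk_nvme_print_completion prints above is the NVMe status code type / status code in hex, which the same lines already expand to ABORTED - SQ DELETION: every queued READ and WRITE was aborted because its submission queue was deleted while the controllers were being reset. A quick way to tally those aborts from a saved copy of this console output is sketched below; the file name nvmf_tc3.log is only a placeholder.

  # Count aborted READ vs WRITE commands and the (SCT/SC) pairs reported above.
  # "nvmf_tc3.log" is a hypothetical capture of this console output.
  grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' nvmf_tc3.log | awk '{print $2}' | sort | uniq -c
  grep -Eo 'ABORTED - SQ DELETION \([0-9a-f]+/[0-9a-f]+\)' nvmf_tc3.log | sort | uniq -c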
00:27:37.344 task offset: 24320 on job bdev=Nvme1n1 fails
00:27:37.344
00:27:37.344 Latency(us)
00:27:37.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:37.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme1n1 ended in about 0.62 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme1n1 : 0.62 265.67 16.60 103.67 0.00 171885.17 26214.40 202724.69
00:27:37.344 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme2n1 ended in about 0.65 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme2n1 : 0.65 318.52 19.91 98.01 0.00 150542.46 89128.96 138062.51
00:27:37.344 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme3n1 ended in about 0.66 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme3n1 : 0.66 317.35 19.83 97.64 0.00 149299.20 59419.31 152043.52
00:27:37.344 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme4n1 ended in about 0.66 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme4n1 : 0.66 316.18 19.76 97.29 0.00 147960.37 87818.24 127576.75
00:27:37.344 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme5n1 ended in about 0.66 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme5n1 : 0.66 569.48 35.59 96.93 0.00 90562.75 10649.60 100925.44
00:27:37.344 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme6n1 ended in about 0.64 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme6n1 : 0.64 393.02 24.56 100.21 0.00 120518.97 34297.17 108789.76
00:27:37.344 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme7n1 ended in about 0.65 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme7n1 : 0.65 272.85 17.05 78.62 0.00 166760.68 6362.45 138936.32
00:27:37.344 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme8n1 ended in about 0.64 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme8n1 : 0.64 392.23 24.51 40.47 0.00 130755.91 9557.33 115343.36
00:27:37.344 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme9n1 ended in about 0.67 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme9n1 : 0.67 245.34 15.33 95.74 0.00 168097.92 81701.55 169519.79
00:27:37.344 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.344 Job: Nvme10n1 ended in about 0.64 seconds with error
00:27:37.344 Verification LBA range: start 0x0 length 0x400
00:27:37.344 Nvme10n1 : 0.64 324.76 20.30 99.93 0.00 132520.66 24029.87 122333.87
00:27:37.344 ===================================================================================================================
00:27:37.344 Total : 3415.38 213.46 908.51 0.00 138162.77 6362.45 202724.69
00:27:37.344 [2024-07-26 13:38:34.548217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:37.344 [2024-07-26 13:38:34.548262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9]
resetting controller 00:27:37.344 [2024-07-26 13:38:34.548908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.549188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.549215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af5230 with addr=10.0.0.2, port=4420 00:27:37.344 [2024-07-26 13:38:34.549227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af5230 is same with the state(5) to be set 00:27:37.344 [2024-07-26 13:38:34.549244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1affb80 (9): Bad file descriptor 00:27:37.344 [2024-07-26 13:38:34.549256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb8fe0 (9): Bad file descriptor 00:27:37.344 [2024-07-26 13:38:34.549267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a517d0 (9): Bad file descriptor 00:27:37.344 [2024-07-26 13:38:34.549275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:37.344 [2024-07-26 13:38:34.549282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:37.344 [2024-07-26 13:38:34.549290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:37.344 [2024-07-26 13:38:34.549416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.344 [2024-07-26 13:38:34.549823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.550462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.550502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af24d0 with addr=10.0.0.2, port=4420 00:27:37.344 [2024-07-26 13:38:34.550513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af24d0 is same with the state(5) to be set 00:27:37.344 [2024-07-26 13:38:34.551049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.551579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.551617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9f800 with addr=10.0.0.2, port=4420 00:27:37.344 [2024-07-26 13:38:34.551628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f800 is same with the state(5) to be set 00:27:37.344 [2024-07-26 13:38:34.552150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.552524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.552562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2eee0 with addr=10.0.0.2, port=4420 00:27:37.344 [2024-07-26 13:38:34.552574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eee0 is same with the state(5) to be set 00:27:37.344 [2024-07-26 13:38:34.553101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.344 [2024-07-26 13:38:34.553426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:27:37.344 [2024-07-26 13:38:34.553437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c352d0 with addr=10.0.0.2, port=4420 00:27:37.344 [2024-07-26 13:38:34.553445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c352d0 is same with the state(5) to be set 00:27:37.344 [2024-07-26 13:38:34.553459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af5230 (9): Bad file descriptor 00:27:37.344 [2024-07-26 13:38:34.553470] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:37.344 [2024-07-26 13:38:34.553476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:37.344 [2024-07-26 13:38:34.553485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:37.344 [2024-07-26 13:38:34.553501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:37.344 [2024-07-26 13:38:34.553508] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:37.344 [2024-07-26 13:38:34.553515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:37.344 [2024-07-26 13:38:34.553526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:37.344 [2024-07-26 13:38:34.553532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:37.344 [2024-07-26 13:38:34.553539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:37.344 [2024-07-26 13:38:34.553570] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.344 [2024-07-26 13:38:34.553582] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.344 [2024-07-26 13:38:34.553592] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.345 [2024-07-26 13:38:34.553609] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.345 [2024-07-26 13:38:34.553953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.553965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.553971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.345 [2024-07-26 13:38:34.553986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af24d0 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.553996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f800 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.554006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eee0 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.554015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c352d0 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.554023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.554030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.554037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:37.345 [2024-07-26 13:38:34.554088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:37.345 [2024-07-26 13:38:34.554102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:37.345 [2024-07-26 13:38:34.554110] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.554132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.554140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.554146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.345 [2024-07-26 13:38:34.554156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.554163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.554170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:37.345 [2024-07-26 13:38:34.554179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.554185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.554192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:37.345 [2024-07-26 13:38:34.554210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.554217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.554224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:37.345 [2024-07-26 13:38:34.554258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.345 [2024-07-26 13:38:34.554266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.554272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.554278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.554801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.345 [2024-07-26 13:38:34.555172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.345 [2024-07-26 13:38:34.555182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2d3a0 with addr=10.0.0.2, port=4420 00:27:37.345 [2024-07-26 13:38:34.555190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d3a0 is same with the state(5) to be set 00:27:37.345 [2024-07-26 13:38:34.555549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.345 [2024-07-26 13:38:34.556070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.345 [2024-07-26 13:38:34.556081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2eab0 with addr=10.0.0.2, port=4420 00:27:37.345 [2024-07-26 13:38:34.556088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2eab0 is same with the state(5) to be set 00:27:37.345 [2024-07-26 13:38:34.556117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2d3a0 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.556126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2eab0 (9): Bad file descriptor 00:27:37.345 [2024-07-26 13:38:34.556153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.556160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.556167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:37.345 [2024-07-26 13:38:34.556176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:37.345 [2024-07-26 13:38:34.556182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:37.345 [2024-07-26 13:38:34.556193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:37.345 [2024-07-26 13:38:34.556227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.345 [2024-07-26 13:38:34.556235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
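The repeated connect() failed, errno = 111 entries above are plain POSIX error codes surfaced by SPDK's posix sock layer: on Linux, errno 111 is ECONNREFUSED, which is expected while bdevperf keeps retrying listeners that the shutting-down target has already closed. Assuming the kernel headers are installed on the build host, the mapping can be confirmed with the one-liner below.

  # errno 111 on Linux is ECONNREFUSED (requires the kernel/libc headers package).
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */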
00:27:37.345 13:38:34 -- target/shutdown.sh@135 -- # nvmfpid= 00:27:37.345 13:38:34 -- target/shutdown.sh@138 -- # sleep 1 00:27:38.286 13:38:35 -- target/shutdown.sh@141 -- # kill -9 1094068 00:27:38.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1094068) - No such process 00:27:38.286 13:38:35 -- target/shutdown.sh@141 -- # true 00:27:38.286 13:38:35 -- target/shutdown.sh@143 -- # stoptarget 00:27:38.286 13:38:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:38.286 13:38:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:38.286 13:38:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.286 13:38:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:38.286 13:38:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:38.286 13:38:35 -- nvmf/common.sh@116 -- # sync 00:27:38.286 13:38:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:38.286 13:38:35 -- nvmf/common.sh@119 -- # set +e 00:27:38.286 13:38:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:38.286 13:38:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:38.286 rmmod nvme_tcp 00:27:38.547 rmmod nvme_fabrics 00:27:38.547 rmmod nvme_keyring 00:27:38.547 13:38:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:38.547 13:38:35 -- nvmf/common.sh@123 -- # set -e 00:27:38.547 13:38:35 -- nvmf/common.sh@124 -- # return 0 00:27:38.547 13:38:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:27:38.547 13:38:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:38.547 13:38:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:38.547 13:38:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:38.547 13:38:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.547 13:38:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:38.547 13:38:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.547 13:38:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.547 13:38:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.461 13:38:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:40.461 00:27:40.461 real 0m7.404s 00:27:40.461 user 0m17.240s 00:27:40.461 sys 0m1.247s 00:27:40.461 13:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.461 13:38:37 -- common/autotest_common.sh@10 -- # set +x 00:27:40.461 ************************************ 00:27:40.461 END TEST nvmf_shutdown_tc3 00:27:40.461 ************************************ 00:27:40.461 13:38:37 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:27:40.461 00:27:40.461 real 0m32.237s 00:27:40.461 user 1m16.947s 00:27:40.461 sys 0m9.090s 00:27:40.461 13:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.461 13:38:37 -- common/autotest_common.sh@10 -- # set +x 00:27:40.461 ************************************ 00:27:40.461 END TEST nvmf_shutdown 00:27:40.461 ************************************ 00:27:40.723 13:38:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:40.723 13:38:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:40.723 13:38:37 -- common/autotest_common.sh@10 -- # set +x 00:27:40.723 13:38:38 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:40.723 13:38:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:40.723 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:27:40.723 
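Condensed, the stoptarget/nvmftestfini sequence traced above amounts to the teardown sketch below; the $nvmfpid and $testdir names are stand-ins for the variables that the real shutdown.sh and nvmf/common.sh maintain, and the authoritative logic lives in those scripts.

  # Illustrative teardown only; variable names are assumptions.
  kill -9 "$nvmfpid" 2>/dev/null || true             # target may already have exited
  rm -f ./local-job0-0-verify.state                  # bdevperf job state file
  rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
  sync
  modprobe -v -r nvme-tcp nvme-fabrics               # per the rmmod lines above, this also drops nvme_keyring
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1 2>/dev/null || true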
13:38:38 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:40.723 13:38:38 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:40.723 13:38:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:40.723 13:38:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.723 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:27:40.723 ************************************ 00:27:40.723 START TEST nvmf_multicontroller 00:27:40.723 ************************************ 00:27:40.723 13:38:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:40.723 * Looking for test storage... 00:27:40.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:40.723 13:38:38 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.723 13:38:38 -- nvmf/common.sh@7 -- # uname -s 00:27:40.723 13:38:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.723 13:38:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.723 13:38:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.723 13:38:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.723 13:38:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.723 13:38:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.723 13:38:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.723 13:38:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.723 13:38:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.723 13:38:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.723 13:38:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:40.723 13:38:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:40.723 13:38:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.723 13:38:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.723 13:38:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.723 13:38:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.723 13:38:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.723 13:38:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.723 13:38:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.723 13:38:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.723 13:38:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.723 13:38:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.723 13:38:38 -- paths/export.sh@5 -- # export PATH 00:27:40.723 13:38:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.723 13:38:38 -- nvmf/common.sh@46 -- # : 0 00:27:40.723 13:38:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:40.723 13:38:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:40.723 13:38:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:40.723 13:38:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.723 13:38:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.723 13:38:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:40.723 13:38:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:40.723 13:38:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:40.723 13:38:38 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.723 13:38:38 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:40.723 13:38:38 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:40.723 13:38:38 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:40.723 13:38:38 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:40.723 13:38:38 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:40.723 13:38:38 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:40.723 13:38:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:40.724 13:38:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.724 13:38:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:40.724 13:38:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:40.724 13:38:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:40.724 13:38:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.724 13:38:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.724 13:38:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:40.724 13:38:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:40.724 13:38:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:40.724 13:38:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:40.724 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:27:48.868 13:38:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:48.868 13:38:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:48.868 13:38:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:48.868 13:38:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:48.868 13:38:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:48.868 13:38:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:48.868 13:38:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:48.868 13:38:44 -- nvmf/common.sh@294 -- # net_devs=() 00:27:48.868 13:38:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:48.868 13:38:44 -- nvmf/common.sh@295 -- # e810=() 00:27:48.868 13:38:44 -- nvmf/common.sh@295 -- # local -ga e810 00:27:48.868 13:38:44 -- nvmf/common.sh@296 -- # x722=() 00:27:48.868 13:38:44 -- nvmf/common.sh@296 -- # local -ga x722 00:27:48.868 13:38:44 -- nvmf/common.sh@297 -- # mlx=() 00:27:48.868 13:38:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:48.868 13:38:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.868 13:38:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:48.868 13:38:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:48.868 13:38:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.868 13:38:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:48.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:48.868 13:38:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.868 13:38:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:48.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:48.868 13:38:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:27:48.868 13:38:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.868 13:38:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.868 13:38:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.868 13:38:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:48.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:48.868 13:38:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.868 13:38:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.868 13:38:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.868 13:38:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.868 13:38:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:48.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:48.868 13:38:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.868 13:38:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:48.868 13:38:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:48.868 13:38:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:48.868 13:38:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.868 13:38:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.868 13:38:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.868 13:38:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:48.868 13:38:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.868 13:38:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.868 13:38:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:48.868 13:38:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.868 13:38:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.868 13:38:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:48.868 13:38:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:48.868 13:38:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.868 13:38:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.868 13:38:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.868 13:38:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.868 13:38:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:48.868 13:38:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.868 13:38:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.868 13:38:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:27:48.868 13:38:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:48.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:27:48.868 00:27:48.868 --- 10.0.0.2 ping statistics --- 00:27:48.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.868 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:27:48.868 13:38:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:27:48.868 00:27:48.868 --- 10.0.0.1 ping statistics --- 00:27:48.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.868 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:27:48.868 13:38:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.868 13:38:45 -- nvmf/common.sh@410 -- # return 0 00:27:48.868 13:38:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:48.868 13:38:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.868 13:38:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:48.868 13:38:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:48.868 13:38:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.868 13:38:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:48.868 13:38:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:48.868 13:38:45 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:48.868 13:38:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:48.868 13:38:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:48.868 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:27:48.868 13:38:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:48.868 13:38:45 -- nvmf/common.sh@469 -- # nvmfpid=1098962 00:27:48.868 13:38:45 -- nvmf/common.sh@470 -- # waitforlisten 1098962 00:27:48.868 13:38:45 -- common/autotest_common.sh@819 -- # '[' -z 1098962 ']' 00:27:48.868 13:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.868 13:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.868 13:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.868 13:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.868 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:27:48.868 [2024-07-26 13:38:45.355487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:48.868 [2024-07-26 13:38:45.355549] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.868 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.868 [2024-07-26 13:38:45.424680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.868 [2024-07-26 13:38:45.454483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:48.868 [2024-07-26 13:38:45.454603] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
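For orientation, the nvmftestinit commands traced above carve the two ice ports into a target-side network namespace and an initiator-side interface before nvmf_tgt is launched inside that namespace. Stripped of the xtrace noise they come down to the sketch below (interface and namespace names are the ones printed in this run; run as root).

  # cvl_0_0 becomes the in-namespace target port (10.0.0.2); cvl_0_1 stays in the
  # root namespace as the initiator port (10.0.0.1).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target path
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator path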
00:27:48.869 [2024-07-26 13:38:45.454611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.869 [2024-07-26 13:38:45.454619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:48.869 [2024-07-26 13:38:45.454752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.869 [2024-07-26 13:38:45.454907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.869 [2024-07-26 13:38:45.454908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.869 13:38:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.869 13:38:46 -- common/autotest_common.sh@852 -- # return 0 00:27:48.869 13:38:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:48.869 13:38:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.869 13:38:46 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 [2024-07-26 13:38:46.156306] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 Malloc0 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 [2024-07-26 13:38:46.204505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 [2024-07-26 13:38:46.212429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
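The rpc_cmd calls above drive SPDK's JSON-RPC interface against the target's default /var/tmp/spdk.sock. Reproducing the cnode1 configuration by hand with scripts/rpc.py would look roughly like the sketch below; it reuses the exact arguments from the traced calls, and the rpc.py path is relative to an SPDK checkout.

  # Sketch: rebuild the cnode1 target configuration with SPDK's rpc.py client.
  RPC="./scripts/rpc.py"                              # run from the SPDK repo root
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte IO unit
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB / 512 B-block malloc bdev
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421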
00:27:48.869 13:38:46 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 Malloc1 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:48.869 13:38:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:48.869 13:38:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.869 13:38:46 -- host/multicontroller.sh@44 -- # bdevperf_pid=1099169 00:27:48.869 13:38:46 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.869 13:38:46 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:48.869 13:38:46 -- host/multicontroller.sh@47 -- # waitforlisten 1099169 /var/tmp/bdevperf.sock 00:27:48.869 13:38:46 -- common/autotest_common.sh@819 -- # '[' -z 1099169 ']' 00:27:48.869 13:38:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.869 13:38:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.869 13:38:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:48.869 13:38:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.869 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:49.852 13:38:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.852 13:38:47 -- common/autotest_common.sh@852 -- # return 0 00:27:49.852 13:38:47 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:49.852 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.852 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:49.852 NVMe0n1 00:27:49.852 13:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.852 13:38:47 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.852 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.852 13:38:47 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:49.852 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:49.852 13:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.852 1 00:27:50.113 13:38:47 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:50.113 13:38:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:50.113 13:38:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:50.113 13:38:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:50.113 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 request: 00:27:50.113 { 00:27:50.113 "name": "NVMe0", 00:27:50.113 "trtype": "tcp", 00:27:50.113 "traddr": "10.0.0.2", 00:27:50.113 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:50.113 "hostaddr": "10.0.0.2", 00:27:50.113 "hostsvcid": "60000", 00:27:50.113 "adrfam": "ipv4", 00:27:50.113 "trsvcid": "4420", 00:27:50.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.113 "method": "bdev_nvme_attach_controller", 00:27:50.113 "req_id": 1 00:27:50.113 } 00:27:50.113 Got JSON-RPC error response 00:27:50.113 response: 00:27:50.113 { 00:27:50.113 "code": -114, 00:27:50.113 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:50.113 } 00:27:50.113 13:38:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # es=1 00:27:50.113 13:38:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:50.113 13:38:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:50.113 13:38:47 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:50.113 13:38:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:50.113 13:38:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:50.113 13:38:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:50.113 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 request: 00:27:50.113 { 00:27:50.113 "name": "NVMe0", 00:27:50.113 "trtype": "tcp", 00:27:50.113 "traddr": "10.0.0.2", 00:27:50.113 "hostaddr": "10.0.0.2", 00:27:50.113 "hostsvcid": "60000", 00:27:50.113 "adrfam": "ipv4", 00:27:50.113 "trsvcid": "4420", 00:27:50.113 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.113 "method": "bdev_nvme_attach_controller", 00:27:50.113 "req_id": 1 00:27:50.113 } 00:27:50.113 Got JSON-RPC error response 00:27:50.113 response: 00:27:50.113 { 00:27:50.113 "code": -114, 00:27:50.113 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:50.113 } 00:27:50.113 13:38:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # es=1 00:27:50.113 13:38:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:50.113 13:38:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:50.113 13:38:47 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:50.113 13:38:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 request: 00:27:50.113 { 00:27:50.113 "name": "NVMe0", 00:27:50.113 "trtype": "tcp", 00:27:50.113 "traddr": "10.0.0.2", 00:27:50.113 "hostaddr": 
"10.0.0.2", 00:27:50.113 "hostsvcid": "60000", 00:27:50.113 "adrfam": "ipv4", 00:27:50.113 "trsvcid": "4420", 00:27:50.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.113 "multipath": "disable", 00:27:50.113 "method": "bdev_nvme_attach_controller", 00:27:50.113 "req_id": 1 00:27:50.113 } 00:27:50.113 Got JSON-RPC error response 00:27:50.113 response: 00:27:50.113 { 00:27:50.113 "code": -114, 00:27:50.113 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:50.113 } 00:27:50.113 13:38:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # es=1 00:27:50.113 13:38:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:50.113 13:38:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:50.113 13:38:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:50.113 13:38:47 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:50.113 13:38:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:50.113 13:38:47 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:50.113 13:38:47 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:50.113 13:38:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:50.113 13:38:47 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:50.113 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.113 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.113 request: 00:27:50.113 { 00:27:50.113 "name": "NVMe0", 00:27:50.113 "trtype": "tcp", 00:27:50.113 "traddr": "10.0.0.2", 00:27:50.113 "hostaddr": "10.0.0.2", 00:27:50.113 "hostsvcid": "60000", 00:27:50.113 "adrfam": "ipv4", 00:27:50.113 "trsvcid": "4420", 00:27:50.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.114 "multipath": "failover", 00:27:50.114 "method": "bdev_nvme_attach_controller", 00:27:50.114 "req_id": 1 00:27:50.114 } 00:27:50.114 Got JSON-RPC error response 00:27:50.114 response: 00:27:50.114 { 00:27:50.114 "code": -114, 00:27:50.114 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:50.114 } 00:27:50.114 13:38:47 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:50.114 13:38:47 -- common/autotest_common.sh@643 -- # es=1 00:27:50.114 13:38:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:50.114 13:38:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:50.114 13:38:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:50.114 13:38:47 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.114 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.114 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.114 00:27:50.114 13:38:47 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:50.114 13:38:47 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:50.114 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.114 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.114 13:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.114 13:38:47 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:50.114 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.114 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.374 00:27:50.374 13:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.374 13:38:47 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:50.374 13:38:47 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:50.374 13:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.374 13:38:47 -- common/autotest_common.sh@10 -- # set +x 00:27:50.374 13:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.374 13:38:47 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:50.374 13:38:47 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:51.316 0 00:27:51.316 13:38:48 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:51.316 13:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.316 13:38:48 -- common/autotest_common.sh@10 -- # set +x 00:27:51.316 13:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.316 13:38:48 -- host/multicontroller.sh@100 -- # killprocess 1099169 00:27:51.316 13:38:48 -- common/autotest_common.sh@926 -- # '[' -z 1099169 ']' 00:27:51.316 13:38:48 -- common/autotest_common.sh@930 -- # kill -0 1099169 00:27:51.578 13:38:48 -- common/autotest_common.sh@931 -- # uname 00:27:51.579 13:38:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.579 13:38:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1099169 00:27:51.579 13:38:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:51.579 13:38:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:51.579 13:38:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1099169' 00:27:51.579 killing process with pid 1099169 00:27:51.579 13:38:48 -- common/autotest_common.sh@945 -- # kill 1099169 00:27:51.579 13:38:48 -- common/autotest_common.sh@950 -- # wait 1099169 00:27:51.579 13:38:48 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.579 13:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.579 13:38:48 -- common/autotest_common.sh@10 -- # set +x 00:27:51.579 13:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.579 13:38:48 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:51.579 13:38:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.579 13:38:48 -- common/autotest_common.sh@10 -- # set +x 00:27:51.579 13:38:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.579 13:38:48 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
00:27:51.579 13:38:48 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:51.579 13:38:48 -- common/autotest_common.sh@1597 -- # read -r file 00:27:51.579 13:38:48 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:51.579 13:38:48 -- common/autotest_common.sh@1596 -- # sort -u 00:27:51.579 13:38:48 -- common/autotest_common.sh@1598 -- # cat 00:27:51.579 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:51.579 [2024-07-26 13:38:46.316624] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:51.579 [2024-07-26 13:38:46.316685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099169 ] 00:27:51.579 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.579 [2024-07-26 13:38:46.375608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.579 [2024-07-26 13:38:46.404485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.579 [2024-07-26 13:38:47.634836] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name e05de26a-b29f-457b-a88d-79c1b405ef0c already exists 00:27:51.579 [2024-07-26 13:38:47.634866] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:e05de26a-b29f-457b-a88d-79c1b405ef0c alias for bdev NVMe1n1 00:27:51.579 [2024-07-26 13:38:47.634877] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:51.579 Running I/O for 1 seconds... 00:27:51.579 00:27:51.579 Latency(us) 00:27:51.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.579 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:51.579 NVMe0n1 : 1.00 21948.17 85.74 0.00 0.00 5819.57 3467.95 22719.15 00:27:51.579 =================================================================================================================== 00:27:51.579 Total : 21948.17 85.74 0.00 0.00 5819.57 3467.95 22719.15 00:27:51.579 Received shutdown signal, test time was about 1.000000 seconds 00:27:51.579 00:27:51.579 Latency(us) 00:27:51.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.579 =================================================================================================================== 00:27:51.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.579 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:51.579 13:38:48 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:51.579 13:38:48 -- common/autotest_common.sh@1597 -- # read -r file 00:27:51.579 13:38:48 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:51.579 13:38:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:51.579 13:38:49 -- nvmf/common.sh@116 -- # sync 00:27:51.579 13:38:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:51.579 13:38:49 -- nvmf/common.sh@119 -- # set +e 00:27:51.579 13:38:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:51.579 13:38:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:51.579 rmmod nvme_tcp 00:27:51.579 rmmod nvme_fabrics 00:27:51.579 rmmod nvme_keyring 00:27:51.841 13:38:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:51.841 13:38:49 -- nvmf/common.sh@123 -- # set 
-e 00:27:51.841 13:38:49 -- nvmf/common.sh@124 -- # return 0 00:27:51.841 13:38:49 -- nvmf/common.sh@477 -- # '[' -n 1098962 ']' 00:27:51.841 13:38:49 -- nvmf/common.sh@478 -- # killprocess 1098962 00:27:51.841 13:38:49 -- common/autotest_common.sh@926 -- # '[' -z 1098962 ']' 00:27:51.841 13:38:49 -- common/autotest_common.sh@930 -- # kill -0 1098962 00:27:51.841 13:38:49 -- common/autotest_common.sh@931 -- # uname 00:27:51.841 13:38:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.841 13:38:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1098962 00:27:51.841 13:38:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:51.841 13:38:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:51.841 13:38:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1098962' 00:27:51.841 killing process with pid 1098962 00:27:51.841 13:38:49 -- common/autotest_common.sh@945 -- # kill 1098962 00:27:51.841 13:38:49 -- common/autotest_common.sh@950 -- # wait 1098962 00:27:51.841 13:38:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:51.841 13:38:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:51.841 13:38:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:51.841 13:38:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.841 13:38:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:51.841 13:38:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.841 13:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.841 13:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.391 13:38:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:54.391 00:27:54.391 real 0m13.305s 00:27:54.391 user 0m16.351s 00:27:54.391 sys 0m5.994s 00:27:54.391 13:38:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.391 13:38:51 -- common/autotest_common.sh@10 -- # set +x 00:27:54.391 ************************************ 00:27:54.391 END TEST nvmf_multicontroller 00:27:54.391 ************************************ 00:27:54.391 13:38:51 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:54.391 13:38:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:54.391 13:38:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.391 13:38:51 -- common/autotest_common.sh@10 -- # set +x 00:27:54.391 ************************************ 00:27:54.391 START TEST nvmf_aer 00:27:54.391 ************************************ 00:27:54.391 13:38:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:54.391 * Looking for test storage... 
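The multicontroller checks traced above amount to attaching one controller name twice over the bdevperf RPC socket and expecting every duplicate attach (different hostnqn, different subnqn, -x disable, -x failover) to fail with code -114, while the attach to the second listener on port 4421 succeeds. A condensed sketch of that sequence issued directly with SPDK's rpc.py (rpc_cmd in the trace is effectively this; socket path, addresses and NQNs are the ones from this run):

  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # First attach creates bdev NVMe0n1 against the 4420 listener.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-using the name NVMe0 with a different hostnqn (or with -x disable / -x failover)
  # is rejected with -114, matching the JSON-RPC errors above.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -q nqn.2021-09-7.io.spdk:00001 && echo "unexpected success"
  # Attaching the same name to the second listener is the multipath case that succeeds.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1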
00:27:54.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.391 13:38:51 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.391 13:38:51 -- nvmf/common.sh@7 -- # uname -s 00:27:54.391 13:38:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.391 13:38:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.391 13:38:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.391 13:38:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.391 13:38:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.391 13:38:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.391 13:38:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.391 13:38:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.391 13:38:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.391 13:38:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.391 13:38:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.391 13:38:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:54.391 13:38:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.391 13:38:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.391 13:38:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.391 13:38:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.391 13:38:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.391 13:38:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.391 13:38:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.391 13:38:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.391 13:38:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.391 13:38:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.391 13:38:51 -- paths/export.sh@5 -- # export PATH 00:27:54.392 13:38:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.392 13:38:51 -- nvmf/common.sh@46 -- # : 0 00:27:54.392 13:38:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:54.392 13:38:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:54.392 13:38:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:54.392 13:38:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.392 13:38:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.392 13:38:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:54.392 13:38:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:54.392 13:38:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:54.392 13:38:51 -- host/aer.sh@11 -- # nvmftestinit 00:27:54.392 13:38:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:54.392 13:38:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.392 13:38:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:54.392 13:38:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:54.392 13:38:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:54.392 13:38:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.392 13:38:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.392 13:38:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.392 13:38:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:54.392 13:38:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:54.392 13:38:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:54.392 13:38:51 -- common/autotest_common.sh@10 -- # set +x 00:28:00.988 13:38:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:00.988 13:38:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:00.988 13:38:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:00.988 13:38:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:00.988 13:38:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:00.988 13:38:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:00.988 13:38:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:00.988 13:38:58 -- nvmf/common.sh@294 -- # net_devs=() 00:28:00.988 13:38:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:00.988 13:38:58 -- nvmf/common.sh@295 -- # e810=() 00:28:00.988 13:38:58 -- nvmf/common.sh@295 -- # local -ga e810 00:28:00.988 13:38:58 -- nvmf/common.sh@296 -- # x722=() 00:28:00.988 
13:38:58 -- nvmf/common.sh@296 -- # local -ga x722 00:28:00.988 13:38:58 -- nvmf/common.sh@297 -- # mlx=() 00:28:00.988 13:38:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:00.988 13:38:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.988 13:38:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:00.988 13:38:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:00.988 13:38:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:00.988 13:38:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.988 13:38:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:00.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:00.988 13:38:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:00.988 13:38:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:00.988 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:00.988 13:38:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:00.988 13:38:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:00.988 13:38:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.988 13:38:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.988 13:38:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.988 13:38:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.988 13:38:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:00.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:00.988 13:38:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.988 13:38:58 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:00.988 13:38:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.988 13:38:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:00.988 13:38:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.988 13:38:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:00.988 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:00.988 13:38:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.989 13:38:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:00.989 13:38:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:00.989 13:38:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:00.989 13:38:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:00.989 13:38:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:00.989 13:38:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.989 13:38:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.989 13:38:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.989 13:38:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:00.989 13:38:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.989 13:38:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.989 13:38:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:00.989 13:38:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.989 13:38:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.989 13:38:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:00.989 13:38:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:00.989 13:38:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.989 13:38:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.989 13:38:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.989 13:38:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.989 13:38:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:00.989 13:38:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.344 13:38:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.344 13:38:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.344 13:38:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:01.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:28:01.344 00:28:01.344 --- 10.0.0.2 ping statistics --- 00:28:01.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.344 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:28:01.344 13:38:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:28:01.344 00:28:01.344 --- 10.0.0.1 ping statistics --- 00:28:01.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.344 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:28:01.344 13:38:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.344 13:38:58 -- nvmf/common.sh@410 -- # return 0 00:28:01.344 13:38:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:01.344 13:38:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.344 13:38:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:01.344 13:38:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:01.344 13:38:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.344 13:38:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:01.344 13:38:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:01.344 13:38:58 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:01.344 13:38:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:01.344 13:38:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:01.344 13:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:01.344 13:38:58 -- nvmf/common.sh@469 -- # nvmfpid=1103871 00:28:01.344 13:38:58 -- nvmf/common.sh@470 -- # waitforlisten 1103871 00:28:01.344 13:38:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:01.344 13:38:58 -- common/autotest_common.sh@819 -- # '[' -z 1103871 ']' 00:28:01.344 13:38:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.344 13:38:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:01.344 13:38:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.344 13:38:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:01.344 13:38:58 -- common/autotest_common.sh@10 -- # set +x 00:28:01.344 [2024-07-26 13:38:58.707008] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:01.344 [2024-07-26 13:38:58.707073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.344 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.344 [2024-07-26 13:38:58.777939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.344 [2024-07-26 13:38:58.816084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:01.344 [2024-07-26 13:38:58.816228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.344 [2024-07-26 13:38:58.816238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.344 [2024-07-26 13:38:58.816246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
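The aer.sh target configuration traced in the next entries reduces to a short RPC sequence against the nvmf_tgt that was just started; a minimal sketch (rpc_cmd here is effectively scripts/rpc.py on the default /var/tmp/spdk.sock; sizes and NQNs are the ones from this run):

  RPC="scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
  $RPC bdev_malloc_create 64 512 --name Malloc0       # 64 MiB malloc bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                            # prints the JSON dumped below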
00:28:01.344 [2024-07-26 13:38:58.816326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.344 [2024-07-26 13:38:58.816448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.344 [2024-07-26 13:38:58.816586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.344 [2024-07-26 13:38:58.816587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.287 13:38:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:02.287 13:38:59 -- common/autotest_common.sh@852 -- # return 0 00:28:02.287 13:38:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:02.287 13:38:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:02.287 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 13:38:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.287 13:38:59 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.287 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.287 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 [2024-07-26 13:38:59.532542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.287 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.287 13:38:59 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:02.287 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.287 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 Malloc0 00:28:02.287 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.287 13:38:59 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:02.287 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.287 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.287 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.288 13:38:59 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.288 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.288 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.288 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.288 13:38:59 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.288 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.288 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.288 [2024-07-26 13:38:59.591934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.288 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.288 13:38:59 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:02.288 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.288 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.288 [2024-07-26 13:38:59.603735] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:02.288 [ 00:28:02.288 { 00:28:02.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:02.288 "subtype": "Discovery", 00:28:02.288 "listen_addresses": [], 00:28:02.288 "allow_any_host": true, 00:28:02.288 "hosts": [] 00:28:02.288 }, 00:28:02.288 { 00:28:02.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:28:02.288 "subtype": "NVMe", 00:28:02.288 "listen_addresses": [ 00:28:02.288 { 00:28:02.288 "transport": "TCP", 00:28:02.288 "trtype": "TCP", 00:28:02.288 "adrfam": "IPv4", 00:28:02.288 "traddr": "10.0.0.2", 00:28:02.288 "trsvcid": "4420" 00:28:02.288 } 00:28:02.288 ], 00:28:02.288 "allow_any_host": true, 00:28:02.288 "hosts": [], 00:28:02.288 "serial_number": "SPDK00000000000001", 00:28:02.288 "model_number": "SPDK bdev Controller", 00:28:02.288 "max_namespaces": 2, 00:28:02.288 "min_cntlid": 1, 00:28:02.288 "max_cntlid": 65519, 00:28:02.288 "namespaces": [ 00:28:02.288 { 00:28:02.288 "nsid": 1, 00:28:02.288 "bdev_name": "Malloc0", 00:28:02.288 "name": "Malloc0", 00:28:02.288 "nguid": "F5DB1C248AED432BA40BA949EB36ABB2", 00:28:02.288 "uuid": "f5db1c24-8aed-432b-a40b-a949eb36abb2" 00:28:02.288 } 00:28:02.288 ] 00:28:02.288 } 00:28:02.288 ] 00:28:02.288 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.288 13:38:59 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:02.288 13:38:59 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:02.288 13:38:59 -- host/aer.sh@33 -- # aerpid=1104103 00:28:02.288 13:38:59 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:02.288 13:38:59 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:02.288 13:38:59 -- common/autotest_common.sh@1244 -- # local i=0 00:28:02.288 13:38:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.288 13:38:59 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:28:02.288 13:38:59 -- common/autotest_common.sh@1247 -- # i=1 00:28:02.288 13:38:59 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:02.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.288 13:38:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.288 13:38:59 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:28:02.288 13:38:59 -- common/autotest_common.sh@1247 -- # i=2 00:28:02.288 13:38:59 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:02.550 13:38:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.550 13:38:59 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:28:02.550 13:38:59 -- common/autotest_common.sh@1247 -- # i=3 00:28:02.550 13:38:59 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:02.550 13:38:59 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:02.550 13:38:59 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:02.550 13:38:59 -- common/autotest_common.sh@1255 -- # return 0 00:28:02.550 13:38:59 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:02.550 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.550 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.550 Malloc1 00:28:02.550 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.550 13:38:59 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:02.550 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.550 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.550 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.550 13:38:59 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:02.550 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.550 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.550 [ 00:28:02.550 { 00:28:02.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:02.550 "subtype": "Discovery", 00:28:02.550 "listen_addresses": [], 00:28:02.550 "allow_any_host": true, 00:28:02.550 "hosts": [] 00:28:02.550 }, 00:28:02.550 { 00:28:02.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.550 "subtype": "NVMe", 00:28:02.550 "listen_addresses": [ 00:28:02.550 { 00:28:02.550 "transport": "TCP", 00:28:02.550 "trtype": "TCP", 00:28:02.550 "adrfam": "IPv4", 00:28:02.550 "traddr": "10.0.0.2", 00:28:02.550 "trsvcid": "4420" 00:28:02.550 } 00:28:02.550 ], 00:28:02.550 "allow_any_host": true, 00:28:02.550 "hosts": [], 00:28:02.550 "serial_number": "SPDK00000000000001", 00:28:02.550 "model_number": "SPDK bdev Controller", 00:28:02.550 "max_namespaces": 2, 00:28:02.550 "min_cntlid": 1, 00:28:02.550 "max_cntlid": 65519, 00:28:02.550 "namespaces": [ 00:28:02.550 { 00:28:02.550 "nsid": 1, 00:28:02.550 "bdev_name": "Malloc0", 00:28:02.550 "name": "Malloc0", 00:28:02.550 "nguid": "F5DB1C248AED432BA40BA949EB36ABB2", 00:28:02.550 Asynchronous Event Request test 00:28:02.550 Attaching to 10.0.0.2 00:28:02.550 Attached to 10.0.0.2 00:28:02.550 Registering asynchronous event callbacks... 00:28:02.550 Starting namespace attribute notice tests for all controllers... 00:28:02.550 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:02.550 aer_cb - Changed Namespace 00:28:02.550 Cleaning up... 
00:28:02.550 "uuid": "f5db1c24-8aed-432b-a40b-a949eb36abb2" 00:28:02.550 }, 00:28:02.550 { 00:28:02.550 "nsid": 2, 00:28:02.550 "bdev_name": "Malloc1", 00:28:02.550 "name": "Malloc1", 00:28:02.550 "nguid": "9A3C9162D5124D9086280D54C89C021C", 00:28:02.550 "uuid": "9a3c9162-d512-4d90-8628-0d54c89c021c" 00:28:02.550 } 00:28:02.550 ] 00:28:02.550 } 00:28:02.550 ] 00:28:02.550 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.550 13:38:59 -- host/aer.sh@43 -- # wait 1104103 00:28:02.550 13:38:59 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:02.550 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.550 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:28:02.550 13:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.550 13:39:00 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:02.550 13:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.550 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:28:02.812 13:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.812 13:39:00 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.812 13:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:02.812 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:28:02.812 13:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:02.812 13:39:00 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:02.812 13:39:00 -- host/aer.sh@51 -- # nvmftestfini 00:28:02.812 13:39:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:02.812 13:39:00 -- nvmf/common.sh@116 -- # sync 00:28:02.812 13:39:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:02.812 13:39:00 -- nvmf/common.sh@119 -- # set +e 00:28:02.812 13:39:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:02.812 13:39:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:02.812 rmmod nvme_tcp 00:28:02.812 rmmod nvme_fabrics 00:28:02.812 rmmod nvme_keyring 00:28:02.812 13:39:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:02.812 13:39:00 -- nvmf/common.sh@123 -- # set -e 00:28:02.812 13:39:00 -- nvmf/common.sh@124 -- # return 0 00:28:02.812 13:39:00 -- nvmf/common.sh@477 -- # '[' -n 1103871 ']' 00:28:02.812 13:39:00 -- nvmf/common.sh@478 -- # killprocess 1103871 00:28:02.812 13:39:00 -- common/autotest_common.sh@926 -- # '[' -z 1103871 ']' 00:28:02.812 13:39:00 -- common/autotest_common.sh@930 -- # kill -0 1103871 00:28:02.812 13:39:00 -- common/autotest_common.sh@931 -- # uname 00:28:02.812 13:39:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:02.812 13:39:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1103871 00:28:02.812 13:39:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:02.812 13:39:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:02.812 13:39:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1103871' 00:28:02.812 killing process with pid 1103871 00:28:02.812 13:39:00 -- common/autotest_common.sh@945 -- # kill 1103871 00:28:02.812 [2024-07-26 13:39:00.173668] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:02.812 13:39:00 -- common/autotest_common.sh@950 -- # wait 1103871 00:28:03.074 13:39:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:03.074 13:39:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:03.074 
13:39:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:03.074 13:39:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.074 13:39:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:03.074 13:39:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.074 13:39:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.074 13:39:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.992 13:39:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:04.992 00:28:04.992 real 0m10.997s 00:28:04.992 user 0m7.961s 00:28:04.992 sys 0m5.750s 00:28:04.992 13:39:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.992 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:28:04.992 ************************************ 00:28:04.993 END TEST nvmf_aer 00:28:04.993 ************************************ 00:28:04.993 13:39:02 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:04.993 13:39:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:04.993 13:39:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:04.993 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:28:04.993 ************************************ 00:28:04.993 START TEST nvmf_async_init 00:28:04.993 ************************************ 00:28:04.993 13:39:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:05.255 * Looking for test storage... 00:28:05.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:05.255 13:39:02 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.255 13:39:02 -- nvmf/common.sh@7 -- # uname -s 00:28:05.255 13:39:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.255 13:39:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.255 13:39:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.255 13:39:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.255 13:39:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.255 13:39:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.255 13:39:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.255 13:39:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.255 13:39:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.255 13:39:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.255 13:39:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:05.255 13:39:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:05.255 13:39:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.255 13:39:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.255 13:39:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.255 13:39:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.255 13:39:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.255 13:39:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.255 13:39:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.255 13:39:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.255 13:39:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.255 13:39:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.255 13:39:02 -- paths/export.sh@5 -- # export PATH 00:28:05.255 13:39:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.255 13:39:02 -- nvmf/common.sh@46 -- # : 0 00:28:05.255 13:39:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:05.255 13:39:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:05.255 13:39:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:05.255 13:39:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.255 13:39:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.255 13:39:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:05.255 13:39:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:05.255 13:39:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:05.255 13:39:02 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:05.255 13:39:02 -- host/async_init.sh@14 -- # null_block_size=512 00:28:05.255 13:39:02 -- host/async_init.sh@15 -- # null_bdev=null0 00:28:05.255 13:39:02 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:05.255 13:39:02 -- host/async_init.sh@20 -- # uuidgen 00:28:05.255 13:39:02 -- host/async_init.sh@20 -- # tr -d - 00:28:05.255 13:39:02 -- host/async_init.sh@20 -- # nguid=19d6b59d417a4c1195055993b17d5bd7 00:28:05.255 13:39:02 -- host/async_init.sh@22 -- # nvmftestinit 00:28:05.255 13:39:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
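async_init.sh registers its null bdev under an NGUID derived from a fresh UUID; the two xtraced commands above (uuidgen piped through tr) collapse to one line, with the value naturally differing per run:

  nguid=$(uuidgen | tr -d -)   # dashes stripped; 19d6b59d417a4c1195055993b17d5bd7 in this run
  # handed later to: rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"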
00:28:05.255 13:39:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.255 13:39:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:05.255 13:39:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:05.255 13:39:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:05.255 13:39:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.255 13:39:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.255 13:39:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.255 13:39:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:05.255 13:39:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:05.255 13:39:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:05.255 13:39:02 -- common/autotest_common.sh@10 -- # set +x 00:28:13.403 13:39:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:13.403 13:39:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:13.403 13:39:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:13.403 13:39:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:13.403 13:39:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:13.403 13:39:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:13.403 13:39:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:13.403 13:39:09 -- nvmf/common.sh@294 -- # net_devs=() 00:28:13.403 13:39:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:13.403 13:39:09 -- nvmf/common.sh@295 -- # e810=() 00:28:13.403 13:39:09 -- nvmf/common.sh@295 -- # local -ga e810 00:28:13.403 13:39:09 -- nvmf/common.sh@296 -- # x722=() 00:28:13.403 13:39:09 -- nvmf/common.sh@296 -- # local -ga x722 00:28:13.403 13:39:09 -- nvmf/common.sh@297 -- # mlx=() 00:28:13.403 13:39:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:13.403 13:39:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.403 13:39:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:13.403 13:39:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:13.403 13:39:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:13.403 13:39:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:13.403 13:39:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:13.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:13.403 13:39:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:13.403 13:39:09 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:13.403 13:39:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:13.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:13.403 13:39:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:13.403 13:39:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:13.403 13:39:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:13.404 13:39:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.404 13:39:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:13.404 13:39:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.404 13:39:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:13.404 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:13.404 13:39:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.404 13:39:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:13.404 13:39:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.404 13:39:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:13.404 13:39:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.404 13:39:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:13.404 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:13.404 13:39:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.404 13:39:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:13.404 13:39:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:13.404 13:39:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:13.404 13:39:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.404 13:39:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.404 13:39:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.404 13:39:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:13.404 13:39:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.404 13:39:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.404 13:39:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:13.404 13:39:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.404 13:39:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.404 13:39:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:13.404 13:39:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:13.404 13:39:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.404 13:39:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:13.404 13:39:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.404 13:39:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.404 13:39:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:13.404 13:39:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.404 13:39:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.404 13:39:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.404 13:39:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:13.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:28:13.404 00:28:13.404 --- 10.0.0.2 ping statistics --- 00:28:13.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.404 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:28:13.404 13:39:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:28:13.404 00:28:13.404 --- 10.0.0.1 ping statistics --- 00:28:13.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.404 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:28:13.404 13:39:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.404 13:39:09 -- nvmf/common.sh@410 -- # return 0 00:28:13.404 13:39:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:13.404 13:39:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.404 13:39:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:13.404 13:39:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.404 13:39:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:13.404 13:39:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:13.404 13:39:09 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:13.404 13:39:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:13.404 13:39:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:13.404 13:39:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 13:39:09 -- nvmf/common.sh@469 -- # nvmfpid=1108585 00:28:13.404 13:39:09 -- nvmf/common.sh@470 -- # waitforlisten 1108585 00:28:13.404 13:39:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:13.404 13:39:09 -- common/autotest_common.sh@819 -- # '[' -z 1108585 ']' 00:28:13.404 13:39:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.404 13:39:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.404 13:39:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.404 13:39:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.404 13:39:09 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 [2024-07-26 13:39:09.802771] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
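The async_init target/host setup traced in the following entries boils down to the RPC sequence below; a condensed sketch (rpc_cmd is effectively scripts/rpc.py on /var/tmp/spdk.sock; note that the same nvmf_tgt application attaches to its own listener, acting as both target and host):

  RPC="scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_null_create null0 1024 512                # 1024 MiB null bdev, 512 B blocks (2097152 blocks)
  $RPC bdev_wait_for_examine
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19d6b59d417a4c1195055993b17d5bd7
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Loopback attach: the app connects to its own listener, producing bdev nvme0n1,
  # whose bdev_get_bdevs description appears below, and is then reset.
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  $RPC bdev_nvme_reset_controller nvme0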
00:28:13.404 [2024-07-26 13:39:09.802820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.404 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.404 [2024-07-26 13:39:09.869120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.404 [2024-07-26 13:39:09.897739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:13.404 [2024-07-26 13:39:09.897858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.404 [2024-07-26 13:39:09.897867] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.404 [2024-07-26 13:39:09.897874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.404 [2024-07-26 13:39:09.897893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.404 13:39:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:13.404 13:39:10 -- common/autotest_common.sh@852 -- # return 0 00:28:13.404 13:39:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:13.404 13:39:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 13:39:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.404 13:39:10 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 [2024-07-26 13:39:10.658523] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 null0 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19d6b59d417a4c1195055993b17d5bd7 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.404 [2024-07-26 13:39:10.698767] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.404 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.404 13:39:10 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:13.404 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.404 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.666 nvme0n1 00:28:13.666 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.666 13:39:10 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:13.666 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.666 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.666 [ 00:28:13.666 { 00:28:13.666 "name": "nvme0n1", 00:28:13.666 "aliases": [ 00:28:13.666 "19d6b59d-417a-4c11-9505-5993b17d5bd7" 00:28:13.666 ], 00:28:13.666 "product_name": "NVMe disk", 00:28:13.666 "block_size": 512, 00:28:13.666 "num_blocks": 2097152, 00:28:13.666 "uuid": "19d6b59d-417a-4c11-9505-5993b17d5bd7", 00:28:13.666 "assigned_rate_limits": { 00:28:13.666 "rw_ios_per_sec": 0, 00:28:13.666 "rw_mbytes_per_sec": 0, 00:28:13.666 "r_mbytes_per_sec": 0, 00:28:13.666 "w_mbytes_per_sec": 0 00:28:13.666 }, 00:28:13.666 "claimed": false, 00:28:13.666 "zoned": false, 00:28:13.666 "supported_io_types": { 00:28:13.666 "read": true, 00:28:13.666 "write": true, 00:28:13.666 "unmap": false, 00:28:13.666 "write_zeroes": true, 00:28:13.666 "flush": true, 00:28:13.666 "reset": true, 00:28:13.666 "compare": true, 00:28:13.666 "compare_and_write": true, 00:28:13.666 "abort": true, 00:28:13.666 "nvme_admin": true, 00:28:13.666 "nvme_io": true 00:28:13.666 }, 00:28:13.666 "driver_specific": { 00:28:13.666 "nvme": [ 00:28:13.666 { 00:28:13.666 "trid": { 00:28:13.666 "trtype": "TCP", 00:28:13.666 "adrfam": "IPv4", 00:28:13.666 "traddr": "10.0.0.2", 00:28:13.666 "trsvcid": "4420", 00:28:13.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:13.666 }, 00:28:13.666 "ctrlr_data": { 00:28:13.666 "cntlid": 1, 00:28:13.666 "vendor_id": "0x8086", 00:28:13.666 "model_number": "SPDK bdev Controller", 00:28:13.666 "serial_number": "00000000000000000000", 00:28:13.666 "firmware_revision": "24.01.1", 00:28:13.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.666 "oacs": { 00:28:13.666 "security": 0, 00:28:13.666 "format": 0, 00:28:13.666 "firmware": 0, 00:28:13.666 "ns_manage": 0 00:28:13.666 }, 00:28:13.666 "multi_ctrlr": true, 00:28:13.666 "ana_reporting": false 00:28:13.666 }, 00:28:13.666 "vs": { 00:28:13.666 "nvme_version": "1.3" 00:28:13.666 }, 00:28:13.666 "ns_data": { 00:28:13.667 "id": 1, 00:28:13.667 "can_share": true 00:28:13.667 } 00:28:13.667 } 00:28:13.667 ], 00:28:13.667 "mp_policy": "active_passive" 00:28:13.667 } 00:28:13.667 } 00:28:13.667 ] 00:28:13.667 13:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:10 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:13.667 13:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:10 -- common/autotest_common.sh@10 -- # set +x 00:28:13.667 [2024-07-26 13:39:10.947240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:13.667 [2024-07-26 13:39:10.947302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1640 (9): Bad file 
descriptor 00:28:13.667 [2024-07-26 13:39:11.079295] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:13.667 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:11 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:13.667 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.667 [ 00:28:13.667 { 00:28:13.667 "name": "nvme0n1", 00:28:13.667 "aliases": [ 00:28:13.667 "19d6b59d-417a-4c11-9505-5993b17d5bd7" 00:28:13.667 ], 00:28:13.667 "product_name": "NVMe disk", 00:28:13.667 "block_size": 512, 00:28:13.667 "num_blocks": 2097152, 00:28:13.667 "uuid": "19d6b59d-417a-4c11-9505-5993b17d5bd7", 00:28:13.667 "assigned_rate_limits": { 00:28:13.667 "rw_ios_per_sec": 0, 00:28:13.667 "rw_mbytes_per_sec": 0, 00:28:13.667 "r_mbytes_per_sec": 0, 00:28:13.667 "w_mbytes_per_sec": 0 00:28:13.667 }, 00:28:13.667 "claimed": false, 00:28:13.667 "zoned": false, 00:28:13.667 "supported_io_types": { 00:28:13.667 "read": true, 00:28:13.667 "write": true, 00:28:13.667 "unmap": false, 00:28:13.667 "write_zeroes": true, 00:28:13.667 "flush": true, 00:28:13.667 "reset": true, 00:28:13.667 "compare": true, 00:28:13.667 "compare_and_write": true, 00:28:13.667 "abort": true, 00:28:13.667 "nvme_admin": true, 00:28:13.667 "nvme_io": true 00:28:13.667 }, 00:28:13.667 "driver_specific": { 00:28:13.667 "nvme": [ 00:28:13.667 { 00:28:13.667 "trid": { 00:28:13.667 "trtype": "TCP", 00:28:13.667 "adrfam": "IPv4", 00:28:13.667 "traddr": "10.0.0.2", 00:28:13.667 "trsvcid": "4420", 00:28:13.667 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:13.667 }, 00:28:13.667 "ctrlr_data": { 00:28:13.667 "cntlid": 2, 00:28:13.667 "vendor_id": "0x8086", 00:28:13.667 "model_number": "SPDK bdev Controller", 00:28:13.667 "serial_number": "00000000000000000000", 00:28:13.667 "firmware_revision": "24.01.1", 00:28:13.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.667 "oacs": { 00:28:13.667 "security": 0, 00:28:13.667 "format": 0, 00:28:13.667 "firmware": 0, 00:28:13.667 "ns_manage": 0 00:28:13.667 }, 00:28:13.667 "multi_ctrlr": true, 00:28:13.667 "ana_reporting": false 00:28:13.667 }, 00:28:13.667 "vs": { 00:28:13.667 "nvme_version": "1.3" 00:28:13.667 }, 00:28:13.667 "ns_data": { 00:28:13.667 "id": 1, 00:28:13.667 "can_share": true 00:28:13.667 } 00:28:13.667 } 00:28:13.667 ], 00:28:13.667 "mp_policy": "active_passive" 00:28:13.667 } 00:28:13.667 } 00:28:13.667 ] 00:28:13.667 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:11 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.667 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.667 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:11 -- host/async_init.sh@53 -- # mktemp 00:28:13.667 13:39:11 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mPjdrZVmGK 00:28:13.667 13:39:11 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:13.667 13:39:11 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mPjdrZVmGK 00:28:13.667 13:39:11 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.667 13:39:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:11 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:13.667 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.667 [2024-07-26 13:39:11.131867] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:13.667 [2024-07-26 13:39:11.131991] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.667 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.667 13:39:11 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mPjdrZVmGK 00:28:13.667 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.667 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.929 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.929 13:39:11 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mPjdrZVmGK 00:28:13.929 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.929 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.929 [2024-07-26 13:39:11.147908] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:13.929 nvme0n1 00:28:13.929 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.929 13:39:11 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:13.929 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.929 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.929 [ 00:28:13.929 { 00:28:13.929 "name": "nvme0n1", 00:28:13.929 "aliases": [ 00:28:13.929 "19d6b59d-417a-4c11-9505-5993b17d5bd7" 00:28:13.929 ], 00:28:13.929 "product_name": "NVMe disk", 00:28:13.929 "block_size": 512, 00:28:13.929 "num_blocks": 2097152, 00:28:13.929 "uuid": "19d6b59d-417a-4c11-9505-5993b17d5bd7", 00:28:13.929 "assigned_rate_limits": { 00:28:13.929 "rw_ios_per_sec": 0, 00:28:13.929 "rw_mbytes_per_sec": 0, 00:28:13.929 "r_mbytes_per_sec": 0, 00:28:13.929 "w_mbytes_per_sec": 0 00:28:13.929 }, 00:28:13.929 "claimed": false, 00:28:13.929 "zoned": false, 00:28:13.929 "supported_io_types": { 00:28:13.929 "read": true, 00:28:13.929 "write": true, 00:28:13.929 "unmap": false, 00:28:13.929 "write_zeroes": true, 00:28:13.929 "flush": true, 00:28:13.929 "reset": true, 00:28:13.929 "compare": true, 00:28:13.929 "compare_and_write": true, 00:28:13.929 "abort": true, 00:28:13.929 "nvme_admin": true, 00:28:13.929 "nvme_io": true 00:28:13.929 }, 00:28:13.929 "driver_specific": { 00:28:13.929 "nvme": [ 00:28:13.929 { 00:28:13.929 "trid": { 00:28:13.929 "trtype": "TCP", 00:28:13.929 "adrfam": "IPv4", 00:28:13.929 "traddr": "10.0.0.2", 00:28:13.929 "trsvcid": "4421", 00:28:13.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:13.929 }, 00:28:13.929 "ctrlr_data": { 00:28:13.929 "cntlid": 3, 00:28:13.929 "vendor_id": "0x8086", 00:28:13.929 "model_number": "SPDK bdev Controller", 00:28:13.929 "serial_number": "00000000000000000000", 00:28:13.929 "firmware_revision": "24.01.1", 00:28:13.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.929 "oacs": { 00:28:13.929 "security": 0, 00:28:13.929 "format": 0, 00:28:13.929 "firmware": 0, 00:28:13.929 
"ns_manage": 0 00:28:13.929 }, 00:28:13.929 "multi_ctrlr": true, 00:28:13.929 "ana_reporting": false 00:28:13.929 }, 00:28:13.929 "vs": { 00:28:13.929 "nvme_version": "1.3" 00:28:13.929 }, 00:28:13.929 "ns_data": { 00:28:13.929 "id": 1, 00:28:13.929 "can_share": true 00:28:13.929 } 00:28:13.929 } 00:28:13.929 ], 00:28:13.929 "mp_policy": "active_passive" 00:28:13.929 } 00:28:13.929 } 00:28:13.929 ] 00:28:13.929 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.929 13:39:11 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.929 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.929 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:28:13.929 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.929 13:39:11 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mPjdrZVmGK 00:28:13.929 13:39:11 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:13.929 13:39:11 -- host/async_init.sh@78 -- # nvmftestfini 00:28:13.929 13:39:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:13.929 13:39:11 -- nvmf/common.sh@116 -- # sync 00:28:13.929 13:39:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:13.929 13:39:11 -- nvmf/common.sh@119 -- # set +e 00:28:13.929 13:39:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:13.929 13:39:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:13.929 rmmod nvme_tcp 00:28:13.929 rmmod nvme_fabrics 00:28:13.929 rmmod nvme_keyring 00:28:13.929 13:39:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:13.929 13:39:11 -- nvmf/common.sh@123 -- # set -e 00:28:13.929 13:39:11 -- nvmf/common.sh@124 -- # return 0 00:28:13.929 13:39:11 -- nvmf/common.sh@477 -- # '[' -n 1108585 ']' 00:28:13.929 13:39:11 -- nvmf/common.sh@478 -- # killprocess 1108585 00:28:13.929 13:39:11 -- common/autotest_common.sh@926 -- # '[' -z 1108585 ']' 00:28:13.929 13:39:11 -- common/autotest_common.sh@930 -- # kill -0 1108585 00:28:13.929 13:39:11 -- common/autotest_common.sh@931 -- # uname 00:28:13.929 13:39:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:13.929 13:39:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1108585 00:28:13.929 13:39:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:13.929 13:39:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:13.929 13:39:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1108585' 00:28:13.929 killing process with pid 1108585 00:28:13.929 13:39:11 -- common/autotest_common.sh@945 -- # kill 1108585 00:28:13.929 13:39:11 -- common/autotest_common.sh@950 -- # wait 1108585 00:28:14.191 13:39:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:14.191 13:39:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:14.191 13:39:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:14.191 13:39:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.191 13:39:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:14.191 13:39:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.191 13:39:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.191 13:39:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.174 13:39:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:16.174 00:28:16.174 real 0m11.125s 00:28:16.174 user 0m3.937s 00:28:16.174 sys 0m5.608s 00:28:16.174 13:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.174 13:39:13 -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.174 ************************************ 00:28:16.174 END TEST nvmf_async_init 00:28:16.174 ************************************ 00:28:16.174 13:39:13 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:16.174 13:39:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:16.174 13:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.174 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:28:16.174 ************************************ 00:28:16.174 START TEST dma 00:28:16.174 ************************************ 00:28:16.174 13:39:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:16.436 * Looking for test storage... 00:28:16.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.436 13:39:13 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.436 13:39:13 -- nvmf/common.sh@7 -- # uname -s 00:28:16.436 13:39:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.436 13:39:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.436 13:39:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.436 13:39:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.436 13:39:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.436 13:39:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.436 13:39:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.436 13:39:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.436 13:39:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.436 13:39:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.436 13:39:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:16.436 13:39:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:16.436 13:39:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.436 13:39:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.436 13:39:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.436 13:39:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.436 13:39:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.437 13:39:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.437 13:39:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.437 13:39:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@5 -- # export PATH 00:28:16.437 13:39:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- nvmf/common.sh@46 -- # : 0 00:28:16.437 13:39:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:16.437 13:39:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:16.437 13:39:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.437 13:39:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.437 13:39:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:16.437 13:39:13 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:16.437 13:39:13 -- host/dma.sh@13 -- # exit 0 00:28:16.437 00:28:16.437 real 0m0.127s 00:28:16.437 user 0m0.053s 00:28:16.437 sys 0m0.083s 00:28:16.437 13:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.437 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 ************************************ 00:28:16.437 END TEST dma 00:28:16.437 ************************************ 00:28:16.437 13:39:13 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:16.437 13:39:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:16.437 13:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.437 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 ************************************ 00:28:16.437 START TEST nvmf_identify 00:28:16.437 ************************************ 00:28:16.437 13:39:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:16.437 * Looking for 
test storage... 00:28:16.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.437 13:39:13 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.437 13:39:13 -- nvmf/common.sh@7 -- # uname -s 00:28:16.437 13:39:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.437 13:39:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.437 13:39:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.437 13:39:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.437 13:39:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.437 13:39:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.437 13:39:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.437 13:39:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.437 13:39:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.437 13:39:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.437 13:39:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:16.437 13:39:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:16.437 13:39:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.437 13:39:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.437 13:39:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.437 13:39:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.437 13:39:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.437 13:39:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.437 13:39:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.437 13:39:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- paths/export.sh@5 -- # export PATH 00:28:16.437 13:39:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.437 13:39:13 -- nvmf/common.sh@46 -- # : 0 00:28:16.437 13:39:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:16.437 13:39:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:16.437 13:39:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.437 13:39:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.437 13:39:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:16.437 13:39:13 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:16.437 13:39:13 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:16.437 13:39:13 -- host/identify.sh@14 -- # nvmftestinit 00:28:16.437 13:39:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:16.437 13:39:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.437 13:39:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:16.437 13:39:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:16.437 13:39:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:16.437 13:39:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.437 13:39:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.437 13:39:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.437 13:39:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:16.437 13:39:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:16.437 13:39:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:16.437 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:28:24.582 13:39:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:24.582 13:39:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:24.582 13:39:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:24.582 13:39:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:24.582 13:39:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:24.582 13:39:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:24.582 13:39:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:24.582 13:39:20 -- nvmf/common.sh@294 -- # net_devs=() 00:28:24.582 13:39:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:24.582 13:39:20 -- nvmf/common.sh@295 
-- # e810=() 00:28:24.582 13:39:20 -- nvmf/common.sh@295 -- # local -ga e810 00:28:24.582 13:39:20 -- nvmf/common.sh@296 -- # x722=() 00:28:24.582 13:39:20 -- nvmf/common.sh@296 -- # local -ga x722 00:28:24.582 13:39:20 -- nvmf/common.sh@297 -- # mlx=() 00:28:24.582 13:39:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:24.582 13:39:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.582 13:39:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:24.582 13:39:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:24.583 13:39:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.583 13:39:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:24.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:24.583 13:39:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.583 13:39:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:24.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:24.583 13:39:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.583 13:39:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.583 13:39:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.583 13:39:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:24.583 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:28:24.583 13:39:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.583 13:39:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.583 13:39:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.583 13:39:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.583 13:39:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:24.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:24.583 13:39:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.583 13:39:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:24.583 13:39:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:24.583 13:39:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:24.583 13:39:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.583 13:39:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.583 13:39:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.583 13:39:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:24.583 13:39:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.583 13:39:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.583 13:39:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:24.583 13:39:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.583 13:39:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.583 13:39:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:24.583 13:39:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:24.583 13:39:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.583 13:39:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.583 13:39:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.583 13:39:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.583 13:39:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:24.583 13:39:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.583 13:39:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.583 13:39:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.583 13:39:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:24.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:28:24.583 00:28:24.583 --- 10.0.0.2 ping statistics --- 00:28:24.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.583 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:28:24.583 13:39:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:28:24.583 00:28:24.583 --- 10.0.0.1 ping statistics --- 00:28:24.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.583 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:28:24.583 13:39:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.583 13:39:21 -- nvmf/common.sh@410 -- # return 0 00:28:24.583 13:39:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:24.583 13:39:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.583 13:39:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:24.583 13:39:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:24.583 13:39:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.583 13:39:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:24.583 13:39:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:24.583 13:39:21 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:24.583 13:39:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 13:39:21 -- host/identify.sh@19 -- # nvmfpid=1113321 00:28:24.583 13:39:21 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:24.583 13:39:21 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:24.583 13:39:21 -- host/identify.sh@23 -- # waitforlisten 1113321 00:28:24.583 13:39:21 -- common/autotest_common.sh@819 -- # '[' -z 1113321 ']' 00:28:24.583 13:39:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.583 13:39:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.583 13:39:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.583 13:39:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 [2024-07-26 13:39:21.114391] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:24.583 [2024-07-26 13:39:21.114459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.583 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.583 [2024-07-26 13:39:21.186038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.583 [2024-07-26 13:39:21.225390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.583 [2024-07-26 13:39:21.225535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.583 [2024-07-26 13:39:21.225545] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.583 [2024-07-26 13:39:21.225553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
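The target for identify.sh is started inside that namespace on all four cores (-m 0xF), and the script then blocks until the RPC socket is up. The snippet below is a simplified stand-in for the nvmfappstart/waitforlisten helpers, using the binary path and arguments from the trace; the real helpers in autotest_common.sh add retries, timeouts and PID bookkeeping:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# crude wait: the app creates /var/tmp/spdk.sock once its RPC server is listening
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt up as pid $nvmfpid"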
00:28:24.583 [2024-07-26 13:39:21.225700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.583 [2024-07-26 13:39:21.225827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.583 [2024-07-26 13:39:21.225992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.583 [2024-07-26 13:39:21.225992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.583 13:39:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:24.583 13:39:21 -- common/autotest_common.sh@852 -- # return 0 00:28:24.583 13:39:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.583 13:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 [2024-07-26 13:39:21.896386] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.583 13:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.583 13:39:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:24.583 13:39:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 13:39:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.583 13:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 Malloc0 00:28:24.583 13:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.583 13:39:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.583 13:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 13:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.583 13:39:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:24.583 13:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.583 13:39:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.583 13:39:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.583 13:39:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.583 13:39:21 -- common/autotest_common.sh@10 -- # set +x 00:28:24.584 [2024-07-26 13:39:21.995895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.584 13:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.584 13:39:22 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.584 13:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.584 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:28:24.584 13:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.584 13:39:22 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:24.584 13:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.584 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:28:24.584 [2024-07-26 13:39:22.019753] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:24.584 [ 
00:28:24.584 { 00:28:24.584 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:24.584 "subtype": "Discovery", 00:28:24.584 "listen_addresses": [ 00:28:24.584 { 00:28:24.584 "transport": "TCP", 00:28:24.584 "trtype": "TCP", 00:28:24.584 "adrfam": "IPv4", 00:28:24.584 "traddr": "10.0.0.2", 00:28:24.584 "trsvcid": "4420" 00:28:24.584 } 00:28:24.584 ], 00:28:24.584 "allow_any_host": true, 00:28:24.584 "hosts": [] 00:28:24.584 }, 00:28:24.584 { 00:28:24.584 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.584 "subtype": "NVMe", 00:28:24.584 "listen_addresses": [ 00:28:24.584 { 00:28:24.584 "transport": "TCP", 00:28:24.584 "trtype": "TCP", 00:28:24.584 "adrfam": "IPv4", 00:28:24.584 "traddr": "10.0.0.2", 00:28:24.584 "trsvcid": "4420" 00:28:24.584 } 00:28:24.584 ], 00:28:24.584 "allow_any_host": true, 00:28:24.584 "hosts": [], 00:28:24.584 "serial_number": "SPDK00000000000001", 00:28:24.584 "model_number": "SPDK bdev Controller", 00:28:24.584 "max_namespaces": 32, 00:28:24.584 "min_cntlid": 1, 00:28:24.584 "max_cntlid": 65519, 00:28:24.584 "namespaces": [ 00:28:24.584 { 00:28:24.584 "nsid": 1, 00:28:24.584 "bdev_name": "Malloc0", 00:28:24.584 "name": "Malloc0", 00:28:24.584 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:24.584 "eui64": "ABCDEF0123456789", 00:28:24.584 "uuid": "f821392e-4181-4a40-aef5-9c43cb5d5e38" 00:28:24.584 } 00:28:24.584 ] 00:28:24.584 } 00:28:24.584 ] 00:28:24.584 13:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.584 13:39:22 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:24.849 [2024-07-26 13:39:22.057239] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
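The subsystem layout dumped by nvmf_get_subsystems above is built by the rpc_cmd calls in host/identify.sh. Expressed directly against scripts/rpc.py with the same arguments as the trace (default /var/tmp/spdk.sock socket), and followed by the identify invocation whose debug output continues below, the sequence is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# query the discovery subsystem over TCP with all debug flags enabled (-L all)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all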
00:28:24.849 [2024-07-26 13:39:22.057293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113578 ] 00:28:24.849 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.849 [2024-07-26 13:39:22.091817] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:24.849 [2024-07-26 13:39:22.091860] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:24.849 [2024-07-26 13:39:22.091865] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:24.849 [2024-07-26 13:39:22.091877] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:24.849 [2024-07-26 13:39:22.091884] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:24.849 [2024-07-26 13:39:22.092407] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:24.849 [2024-07-26 13:39:22.092438] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa7b470 0 00:28:24.849 [2024-07-26 13:39:22.103212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:24.849 [2024-07-26 13:39:22.103228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:24.849 [2024-07-26 13:39:22.103232] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:24.849 [2024-07-26 13:39:22.103236] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:24.849 [2024-07-26 13:39:22.103272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.103277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.103282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.849 [2024-07-26 13:39:22.103296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:24.849 [2024-07-26 13:39:22.103313] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.849 [2024-07-26 13:39:22.111210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.849 [2024-07-26 13:39:22.111219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.849 [2024-07-26 13:39:22.111223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.849 [2024-07-26 13:39:22.111237] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:24.849 [2024-07-26 13:39:22.111243] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:24.849 [2024-07-26 13:39:22.111248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:24.849 [2024-07-26 13:39:22.111260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.849 [2024-07-26 13:39:22.111275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.849 [2024-07-26 13:39:22.111287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.849 [2024-07-26 13:39:22.111469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.849 [2024-07-26 13:39:22.111478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.849 [2024-07-26 13:39:22.111482] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111486] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.849 [2024-07-26 13:39:22.111492] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:24.849 [2024-07-26 13:39:22.111500] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:24.849 [2024-07-26 13:39:22.111511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111515] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.849 [2024-07-26 13:39:22.111525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.849 [2024-07-26 13:39:22.111537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.849 [2024-07-26 13:39:22.111680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.849 [2024-07-26 13:39:22.111686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.849 [2024-07-26 13:39:22.111690] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.849 [2024-07-26 13:39:22.111700] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:24.849 [2024-07-26 13:39:22.111707] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:24.849 [2024-07-26 13:39:22.111714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.849 [2024-07-26 13:39:22.111728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.849 [2024-07-26 13:39:22.111739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.849 [2024-07-26 13:39:22.111919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.849 [2024-07-26 13:39:22.111925] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.849 [2024-07-26 13:39:22.111929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.849 [2024-07-26 13:39:22.111938] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:24.849 [2024-07-26 13:39:22.111947] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.849 [2024-07-26 13:39:22.111954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.849 [2024-07-26 13:39:22.111961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.849 [2024-07-26 13:39:22.111971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.849 [2024-07-26 13:39:22.112159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.850 [2024-07-26 13:39:22.112165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.850 [2024-07-26 13:39:22.112169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.850 [2024-07-26 13:39:22.112177] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:24.850 [2024-07-26 13:39:22.112182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:24.850 [2024-07-26 13:39:22.112189] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:24.850 [2024-07-26 13:39:22.112297] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:24.850 [2024-07-26 13:39:22.112303] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:24.850 [2024-07-26 13:39:22.112311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.112325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.850 [2024-07-26 13:39:22.112336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.850 [2024-07-26 13:39:22.112620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.850 [2024-07-26 13:39:22.112626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.850 [2024-07-26 13:39:22.112630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.850 
[2024-07-26 13:39:22.112634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.850 [2024-07-26 13:39:22.112639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:24.850 [2024-07-26 13:39:22.112648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.112662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.850 [2024-07-26 13:39:22.112672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.850 [2024-07-26 13:39:22.112828] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.850 [2024-07-26 13:39:22.112835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.850 [2024-07-26 13:39:22.112838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.850 [2024-07-26 13:39:22.112846] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:24.850 [2024-07-26 13:39:22.112851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.112859] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:24.850 [2024-07-26 13:39:22.112867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.112876] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.112883] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.112890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.850 [2024-07-26 13:39:22.112900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.850 [2024-07-26 13:39:22.113095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.850 [2024-07-26 13:39:22.113102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.850 [2024-07-26 13:39:22.113108] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113113] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7b470): datao=0, datal=4096, cccid=0 00:28:24.850 [2024-07-26 13:39:22.113117] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae4240) on tqpair(0xa7b470): expected_datao=0, payload_size=4096 00:28:24.850 
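Once CSTS.RDY = 1 the driver resets the admin queue and issues IDENTIFY (06) with cdw10:00000001 (CNS = 1, Identify Controller); the "pdu type = 7" C2H data PDU above carries the 4096-byte response (datal=4096, cccid=0), and the identify_done entries just below log the transport max_xfer_size, MDTS and CNTLID decoded from it. A hedged sketch of issuing the same admin command through SPDK's public API (spdk_nvme_ctrlr_cmd_admin_raw plus an admin-completion poll), assuming an already-connected ctrlr handle, is shown here; it is not the test's own code.

```c
/*
 * Hedged sketch: re-issue the "IDENTIFY (06) cdw10:00000001" admin command
 * shown in the log via SPDK's public API and poll the admin queue for the
 * completion.  Assumes `ctrlr` came from spdk_nvme_connect().
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/env.h"

static bool g_identify_done;

static void identify_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    const struct spdk_nvme_ctrlr_data *cdata = cb_arg;

    if (!spdk_nvme_cpl_is_error(cpl)) {
        /* the same values nvme_ctrlr_identify_done prints above */
        printf("CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);
    }
    g_identify_done = true;
}

static int identify_controller(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_cmd cmd = {0};
    /* 4 KiB DMA-able response buffer, matching "datal=4096, cccid=0" */
    struct spdk_nvme_ctrlr_data *cdata = spdk_dma_zmalloc(4096, 4096, NULL);
    int rc;

    if (cdata == NULL) {
        return -ENOMEM;
    }

    cmd.opc = SPDK_NVME_OPC_IDENTIFY;   /* opcode 0x06 */
    cmd.cdw10 = 1;                      /* CNS = 1: Identify Controller */

    rc = spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, cdata, 4096,
                                       identify_cb, cdata);
    while (rc == 0 && !g_identify_done) {
        /* the "wait for identify controller" state is just this poll loop */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }

    spdk_dma_free(cdata);
    return rc;
}
```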
[2024-07-26 13:39:22.113126] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113130] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113297] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.850 [2024-07-26 13:39:22.113304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.850 [2024-07-26 13:39:22.113308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.850 [2024-07-26 13:39:22.113320] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:24.850 [2024-07-26 13:39:22.113324] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:24.850 [2024-07-26 13:39:22.113329] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:24.850 [2024-07-26 13:39:22.113334] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:24.850 [2024-07-26 13:39:22.113338] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:24.850 [2024-07-26 13:39:22.113343] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.113354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.113361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.850 [2024-07-26 13:39:22.113389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.850 [2024-07-26 13:39:22.113578] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.850 [2024-07-26 13:39:22.113584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.850 [2024-07-26 13:39:22.113587] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4240) on tqpair=0xa7b470 00:28:24.850 [2024-07-26 13:39:22.113599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113606] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.850 [2024-07-26 13:39:22.113618] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.850 [2024-07-26 13:39:22.113636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.850 [2024-07-26 13:39:22.113657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.850 [2024-07-26 13:39:22.113674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.113685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:24.850 [2024-07-26 13:39:22.113692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.850 [2024-07-26 13:39:22.113698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7b470) 00:28:24.850 [2024-07-26 13:39:22.113705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.850 [2024-07-26 13:39:22.113718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4240, cid 0, qid 0 00:28:24.850 [2024-07-26 13:39:22.113724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae43a0, cid 1, qid 0 00:28:24.850 [2024-07-26 13:39:22.113728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4500, cid 2, qid 0 00:28:24.850 [2024-07-26 13:39:22.113733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.850 [2024-07-26 13:39:22.113737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae47c0, cid 4, qid 0 00:28:24.850 [2024-07-26 13:39:22.114082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.114088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.114092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae47c0) on 
tqpair=0xa7b470 00:28:24.851 [2024-07-26 13:39:22.114101] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:24.851 [2024-07-26 13:39:22.114106] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:24.851 [2024-07-26 13:39:22.114116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7b470) 00:28:24.851 [2024-07-26 13:39:22.114130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.851 [2024-07-26 13:39:22.114140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae47c0, cid 4, qid 0 00:28:24.851 [2024-07-26 13:39:22.114453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.851 [2024-07-26 13:39:22.114461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.851 [2024-07-26 13:39:22.114464] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114468] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7b470): datao=0, datal=4096, cccid=4 00:28:24.851 [2024-07-26 13:39:22.114476] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae47c0) on tqpair(0xa7b470): expected_datao=0, payload_size=4096 00:28:24.851 [2024-07-26 13:39:22.114484] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114488] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114642] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.114648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.114652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae47c0) on tqpair=0xa7b470 00:28:24.851 [2024-07-26 13:39:22.114667] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:24.851 [2024-07-26 13:39:22.114690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7b470) 00:28:24.851 [2024-07-26 13:39:22.114704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.851 [2024-07-26 13:39:22.114711] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa7b470) 00:28:24.851 [2024-07-26 13:39:22.114724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.851 [2024-07-26 13:39:22.114740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae47c0, cid 4, qid 0 00:28:24.851 [2024-07-26 13:39:22.114745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4920, cid 5, qid 0 00:28:24.851 [2024-07-26 13:39:22.114966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.851 [2024-07-26 13:39:22.114973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.851 [2024-07-26 13:39:22.114977] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114980] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7b470): datao=0, datal=1024, cccid=4 00:28:24.851 [2024-07-26 13:39:22.114985] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae47c0) on tqpair(0xa7b470): expected_datao=0, payload_size=1024 00:28:24.851 [2024-07-26 13:39:22.114991] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.114995] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.115001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.115006] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.115010] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.115013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4920) on tqpair=0xa7b470 00:28:24.851 [2024-07-26 13:39:22.159207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.159216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.159219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae47c0) on tqpair=0xa7b470 00:28:24.851 [2024-07-26 13:39:22.159234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7b470) 00:28:24.851 [2024-07-26 13:39:22.159247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.851 [2024-07-26 13:39:22.159266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae47c0, cid 4, qid 0 00:28:24.851 [2024-07-26 13:39:22.159512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.851 [2024-07-26 13:39:22.159520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.851 [2024-07-26 13:39:22.159524] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159527] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7b470): datao=0, datal=3072, cccid=4 00:28:24.851 [2024-07-26 13:39:22.159532] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae47c0) on tqpair(0xa7b470): expected_datao=0, payload_size=3072 00:28:24.851 [2024-07-26 13:39:22.159539] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:28:24.851 [2024-07-26 13:39:22.159543] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.159776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.159779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae47c0) on tqpair=0xa7b470 00:28:24.851 [2024-07-26 13:39:22.159792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.159799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7b470) 00:28:24.851 [2024-07-26 13:39:22.159805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.851 [2024-07-26 13:39:22.159819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae47c0, cid 4, qid 0 00:28:24.851 [2024-07-26 13:39:22.160023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.851 [2024-07-26 13:39:22.160029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.851 [2024-07-26 13:39:22.160033] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.160036] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7b470): datao=0, datal=8, cccid=4 00:28:24.851 [2024-07-26 13:39:22.160041] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae47c0) on tqpair(0xa7b470): expected_datao=0, payload_size=8 00:28:24.851 [2024-07-26 13:39:22.160048] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.160051] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.200370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.851 [2024-07-26 13:39:22.200382] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.851 [2024-07-26 13:39:22.200386] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.851 [2024-07-26 13:39:22.200390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae47c0) on tqpair=0xa7b470 00:28:24.851 ===================================================== 00:28:24.851 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:24.851 ===================================================== 00:28:24.851 Controller Capabilities/Features 00:28:24.851 ================================ 00:28:24.851 Vendor ID: 0000 00:28:24.851 Subsystem Vendor ID: 0000 00:28:24.851 Serial Number: .................... 00:28:24.851 Model Number: ........................................ 
00:28:24.851 Firmware Version: 24.01.1 00:28:24.851 Recommended Arb Burst: 0 00:28:24.851 IEEE OUI Identifier: 00 00 00 00:28:24.851 Multi-path I/O 00:28:24.851 May have multiple subsystem ports: No 00:28:24.851 May have multiple controllers: No 00:28:24.851 Associated with SR-IOV VF: No 00:28:24.851 Max Data Transfer Size: 131072 00:28:24.851 Max Number of Namespaces: 0 00:28:24.851 Max Number of I/O Queues: 1024 00:28:24.851 NVMe Specification Version (VS): 1.3 00:28:24.851 NVMe Specification Version (Identify): 1.3 00:28:24.851 Maximum Queue Entries: 128 00:28:24.851 Contiguous Queues Required: Yes 00:28:24.851 Arbitration Mechanisms Supported 00:28:24.851 Weighted Round Robin: Not Supported 00:28:24.851 Vendor Specific: Not Supported 00:28:24.851 Reset Timeout: 15000 ms 00:28:24.851 Doorbell Stride: 4 bytes 00:28:24.851 NVM Subsystem Reset: Not Supported 00:28:24.851 Command Sets Supported 00:28:24.851 NVM Command Set: Supported 00:28:24.851 Boot Partition: Not Supported 00:28:24.851 Memory Page Size Minimum: 4096 bytes 00:28:24.852 Memory Page Size Maximum: 4096 bytes 00:28:24.852 Persistent Memory Region: Not Supported 00:28:24.852 Optional Asynchronous Events Supported 00:28:24.852 Namespace Attribute Notices: Not Supported 00:28:24.852 Firmware Activation Notices: Not Supported 00:28:24.852 ANA Change Notices: Not Supported 00:28:24.852 PLE Aggregate Log Change Notices: Not Supported 00:28:24.852 LBA Status Info Alert Notices: Not Supported 00:28:24.852 EGE Aggregate Log Change Notices: Not Supported 00:28:24.852 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.852 Zone Descriptor Change Notices: Not Supported 00:28:24.852 Discovery Log Change Notices: Supported 00:28:24.852 Controller Attributes 00:28:24.852 128-bit Host Identifier: Not Supported 00:28:24.852 Non-Operational Permissive Mode: Not Supported 00:28:24.852 NVM Sets: Not Supported 00:28:24.852 Read Recovery Levels: Not Supported 00:28:24.852 Endurance Groups: Not Supported 00:28:24.852 Predictable Latency Mode: Not Supported 00:28:24.852 Traffic Based Keep ALive: Not Supported 00:28:24.852 Namespace Granularity: Not Supported 00:28:24.852 SQ Associations: Not Supported 00:28:24.852 UUID List: Not Supported 00:28:24.852 Multi-Domain Subsystem: Not Supported 00:28:24.852 Fixed Capacity Management: Not Supported 00:28:24.852 Variable Capacity Management: Not Supported 00:28:24.852 Delete Endurance Group: Not Supported 00:28:24.852 Delete NVM Set: Not Supported 00:28:24.852 Extended LBA Formats Supported: Not Supported 00:28:24.852 Flexible Data Placement Supported: Not Supported 00:28:24.852 00:28:24.852 Controller Memory Buffer Support 00:28:24.852 ================================ 00:28:24.852 Supported: No 00:28:24.852 00:28:24.852 Persistent Memory Region Support 00:28:24.852 ================================ 00:28:24.852 Supported: No 00:28:24.852 00:28:24.852 Admin Command Set Attributes 00:28:24.852 ============================ 00:28:24.852 Security Send/Receive: Not Supported 00:28:24.852 Format NVM: Not Supported 00:28:24.852 Firmware Activate/Download: Not Supported 00:28:24.852 Namespace Management: Not Supported 00:28:24.852 Device Self-Test: Not Supported 00:28:24.852 Directives: Not Supported 00:28:24.852 NVMe-MI: Not Supported 00:28:24.852 Virtualization Management: Not Supported 00:28:24.852 Doorbell Buffer Config: Not Supported 00:28:24.852 Get LBA Status Capability: Not Supported 00:28:24.852 Command & Feature Lockdown Capability: Not Supported 00:28:24.852 Abort Command Limit: 1 00:28:24.852 
Async Event Request Limit: 4 00:28:24.852 Number of Firmware Slots: N/A 00:28:24.852 Firmware Slot 1 Read-Only: N/A 00:28:24.852 Firmware Activation Without Reset: N/A 00:28:24.852 Multiple Update Detection Support: N/A 00:28:24.852 Firmware Update Granularity: No Information Provided 00:28:24.852 Per-Namespace SMART Log: No 00:28:24.852 Asymmetric Namespace Access Log Page: Not Supported 00:28:24.852 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:24.852 Command Effects Log Page: Not Supported 00:28:24.852 Get Log Page Extended Data: Supported 00:28:24.852 Telemetry Log Pages: Not Supported 00:28:24.852 Persistent Event Log Pages: Not Supported 00:28:24.852 Supported Log Pages Log Page: May Support 00:28:24.852 Commands Supported & Effects Log Page: Not Supported 00:28:24.852 Feature Identifiers & Effects Log Page:May Support 00:28:24.852 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.852 Data Area 4 for Telemetry Log: Not Supported 00:28:24.852 Error Log Page Entries Supported: 128 00:28:24.852 Keep Alive: Not Supported 00:28:24.852 00:28:24.852 NVM Command Set Attributes 00:28:24.852 ========================== 00:28:24.852 Submission Queue Entry Size 00:28:24.852 Max: 1 00:28:24.852 Min: 1 00:28:24.852 Completion Queue Entry Size 00:28:24.852 Max: 1 00:28:24.852 Min: 1 00:28:24.852 Number of Namespaces: 0 00:28:24.852 Compare Command: Not Supported 00:28:24.852 Write Uncorrectable Command: Not Supported 00:28:24.852 Dataset Management Command: Not Supported 00:28:24.852 Write Zeroes Command: Not Supported 00:28:24.852 Set Features Save Field: Not Supported 00:28:24.852 Reservations: Not Supported 00:28:24.852 Timestamp: Not Supported 00:28:24.852 Copy: Not Supported 00:28:24.852 Volatile Write Cache: Not Present 00:28:24.852 Atomic Write Unit (Normal): 1 00:28:24.852 Atomic Write Unit (PFail): 1 00:28:24.852 Atomic Compare & Write Unit: 1 00:28:24.852 Fused Compare & Write: Supported 00:28:24.852 Scatter-Gather List 00:28:24.852 SGL Command Set: Supported 00:28:24.852 SGL Keyed: Supported 00:28:24.852 SGL Bit Bucket Descriptor: Not Supported 00:28:24.852 SGL Metadata Pointer: Not Supported 00:28:24.852 Oversized SGL: Not Supported 00:28:24.852 SGL Metadata Address: Not Supported 00:28:24.852 SGL Offset: Supported 00:28:24.852 Transport SGL Data Block: Not Supported 00:28:24.852 Replay Protected Memory Block: Not Supported 00:28:24.852 00:28:24.852 Firmware Slot Information 00:28:24.852 ========================= 00:28:24.852 Active slot: 0 00:28:24.852 00:28:24.852 00:28:24.852 Error Log 00:28:24.852 ========= 00:28:24.852 00:28:24.852 Active Namespaces 00:28:24.852 ================= 00:28:24.852 Discovery Log Page 00:28:24.852 ================== 00:28:24.852 Generation Counter: 2 00:28:24.852 Number of Records: 2 00:28:24.852 Record Format: 0 00:28:24.852 00:28:24.852 Discovery Log Entry 0 00:28:24.852 ---------------------- 00:28:24.852 Transport Type: 3 (TCP) 00:28:24.852 Address Family: 1 (IPv4) 00:28:24.852 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:24.852 Entry Flags: 00:28:24.852 Duplicate Returned Information: 1 00:28:24.852 Explicit Persistent Connection Support for Discovery: 1 00:28:24.852 Transport Requirements: 00:28:24.852 Secure Channel: Not Required 00:28:24.852 Port ID: 0 (0x0000) 00:28:24.852 Controller ID: 65535 (0xffff) 00:28:24.852 Admin Max SQ Size: 128 00:28:24.852 Transport Service Identifier: 4420 00:28:24.852 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:24.852 Transport Address: 10.0.0.2 00:28:24.852 
Discovery Log Entry 1 00:28:24.852 ---------------------- 00:28:24.852 Transport Type: 3 (TCP) 00:28:24.852 Address Family: 1 (IPv4) 00:28:24.852 Subsystem Type: 2 (NVM Subsystem) 00:28:24.852 Entry Flags: 00:28:24.852 Duplicate Returned Information: 0 00:28:24.852 Explicit Persistent Connection Support for Discovery: 0 00:28:24.852 Transport Requirements: 00:28:24.852 Secure Channel: Not Required 00:28:24.852 Port ID: 0 (0x0000) 00:28:24.852 Controller ID: 65535 (0xffff) 00:28:24.852 Admin Max SQ Size: 128 00:28:24.852 Transport Service Identifier: 4420 00:28:24.852 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:24.852 Transport Address: 10.0.0.2 [2024-07-26 13:39:22.200476] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:24.852 [2024-07-26 13:39:22.200490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.852 [2024-07-26 13:39:22.200497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.852 [2024-07-26 13:39:22.200503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.852 [2024-07-26 13:39:22.200509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.852 [2024-07-26 13:39:22.200517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.852 [2024-07-26 13:39:22.200522] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.852 [2024-07-26 13:39:22.200526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.852 [2024-07-26 13:39:22.200533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.852 [2024-07-26 13:39:22.200548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.852 [2024-07-26 13:39:22.200845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.852 [2024-07-26 13:39:22.200853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.852 [2024-07-26 13:39:22.200857] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.852 [2024-07-26 13:39:22.200860] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.200868] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.200871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.200875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.200882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.200896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.201092] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.201098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.201101] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201105] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.201110] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:24.853 [2024-07-26 13:39:22.201114] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:24.853 [2024-07-26 13:39:22.201123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201127] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.201137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.201148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.201334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.201341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.201345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.201359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.201373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.201384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.201576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.201582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.201586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201592] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.201602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.201616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.201626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.201818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 
13:39:22.201825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.201828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.201841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.201848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.201855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.201865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.202074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.202080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.202083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.202096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.202110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.202120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.202293] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.202300] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.202304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.202317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.202331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.202342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.202509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.202516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.202519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 
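The run of "FABRIC PROPERTY GET qid:0 cid:3" entries in this stretch is the destruct path (nvme_ctrlr_shutdown_poll_async) reading controller status after the shutdown notification was set (RTD3E = 0, shutdown timeout = 10000 ms above), until "shutdown complete in 6 milliseconds" is reported a few entries below. A minimal lifecycle sketch, using only the public SPDK API under the assumption that this is how the test connects to the discovery subsystem at 10.0.0.2:4420 and then detaches (detach is what drives the shutdown sequence), follows; return values are mostly unchecked and the discovery-log read is elided for brevity.

```c
/*
 * Hedged lifecycle sketch (not the identify tool's actual source): connect to
 * the discovery subsystem over TCP, then detach.  The detach triggers the
 * controller shutdown sequence whose CSTS polling appears in the log above.
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_ctrlr_opts opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* same target the test connects to in the log */
    spdk_nvme_transport_id_parse(&trid,
        "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2014-08.org.nvmexpress.discovery");

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
    if (ctrlr == NULL) {
        return 1;
    }

    /* ... read the discovery log page here (elided) ... */

    /* detach triggers "Prepare to destruct SSD" and the shutdown poll */
    spdk_nvme_detach(ctrlr);
    return 0;
}
```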
[2024-07-26 13:39:22.202523] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.202535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.202549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.202560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.202728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.202735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.202738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.202751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.202765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.202775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.202952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.202958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.202961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.202974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202978] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.202981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.202988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.202998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.203172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.203178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.203181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.203185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.853 [2024-07-26 13:39:22.203194] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.203198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.853 [2024-07-26 13:39:22.207209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7b470) 00:28:24.853 [2024-07-26 13:39:22.207218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.853 [2024-07-26 13:39:22.207231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae4660, cid 3, qid 0 00:28:24.853 [2024-07-26 13:39:22.207458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.853 [2024-07-26 13:39:22.207465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.853 [2024-07-26 13:39:22.207469] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.207473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae4660) on tqpair=0xa7b470 00:28:24.854 [2024-07-26 13:39:22.207481] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:24.854 00:28:24.854 13:39:22 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:24.854 [2024-07-26 13:39:22.243816] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:24.854 [2024-07-26 13:39:22.243857] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113580 ] 00:28:24.854 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.854 [2024-07-26 13:39:22.277763] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:24.854 [2024-07-26 13:39:22.277807] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:24.854 [2024-07-26 13:39:22.277812] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:24.854 [2024-07-26 13:39:22.277822] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:24.854 [2024-07-26 13:39:22.277828] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:24.854 [2024-07-26 13:39:22.281230] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:24.854 [2024-07-26 13:39:22.281269] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2252470 0 00:28:24.854 [2024-07-26 13:39:22.289212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:24.854 [2024-07-26 13:39:22.289226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:24.854 [2024-07-26 13:39:22.289230] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:24.854 [2024-07-26 13:39:22.289233] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:24.854 [2024-07-26 13:39:22.289264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.289269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.289273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.854 [2024-07-26 13:39:22.289284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:24.854 [2024-07-26 13:39:22.289301] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.854 [2024-07-26 13:39:22.297209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.854 [2024-07-26 13:39:22.297218] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.854 [2024-07-26 13:39:22.297222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.854 [2024-07-26 13:39:22.297239] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:24.854 [2024-07-26 13:39:22.297245] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:24.854 [2024-07-26 13:39:22.297250] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:24.854 [2024-07-26 13:39:22.297261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297268] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.854 [2024-07-26 13:39:22.297276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.854 [2024-07-26 13:39:22.297292] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.854 [2024-07-26 13:39:22.297529] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.854 [2024-07-26 13:39:22.297538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.854 [2024-07-26 13:39:22.297542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.854 [2024-07-26 13:39:22.297553] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:24.854 [2024-07-26 13:39:22.297561] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:24.854 [2024-07-26 13:39:22.297568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.854 [2024-07-26 13:39:22.297583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.854 [2024-07-26 13:39:22.297595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.854 [2024-07-26 13:39:22.297821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:28:24.854 [2024-07-26 13:39:22.297828] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.854 [2024-07-26 13:39:22.297831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.854 [2024-07-26 13:39:22.297841] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:24.854 [2024-07-26 13:39:22.297849] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:24.854 [2024-07-26 13:39:22.297856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.297864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.854 [2024-07-26 13:39:22.297871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.854 [2024-07-26 13:39:22.297882] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.854 [2024-07-26 13:39:22.298120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.854 [2024-07-26 13:39:22.298127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.854 [2024-07-26 13:39:22.298130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.298134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.854 [2024-07-26 13:39:22.298140] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:24.854 [2024-07-26 13:39:22.298150] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.298154] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.854 [2024-07-26 13:39:22.298157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.854 [2024-07-26 13:39:22.298164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.854 [2024-07-26 13:39:22.298175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.854 [2024-07-26 13:39:22.298416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.298423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.298430] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.855 [2024-07-26 13:39:22.298439] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:24.855 [2024-07-26 13:39:22.298444] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:24.855 [2024-07-26 13:39:22.298451] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:24.855 [2024-07-26 13:39:22.298557] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:24.855 [2024-07-26 13:39:22.298560] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:24.855 [2024-07-26 13:39:22.298568] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298572] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.298582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.855 [2024-07-26 13:39:22.298594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.855 [2024-07-26 13:39:22.298837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.298843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.298846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.855 [2024-07-26 13:39:22.298856] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:24.855 [2024-07-26 13:39:22.298865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.298872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.298879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.855 [2024-07-26 13:39:22.298890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.855 [2024-07-26 13:39:22.299134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.299141] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.299144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.855 [2024-07-26 13:39:22.299153] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:24.855 [2024-07-26 13:39:22.299158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.299165] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:24.855 [2024-07-26 13:39:22.299175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.299182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.299199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.855 [2024-07-26 13:39:22.299216] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.855 [2024-07-26 13:39:22.299466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.855 [2024-07-26 13:39:22.299473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.855 [2024-07-26 13:39:22.299477] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299481] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=4096, cccid=0 00:28:24.855 [2024-07-26 13:39:22.299486] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb240) on tqpair(0x2252470): expected_datao=0, payload_size=4096 00:28:24.855 [2024-07-26 13:39:22.299560] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299566] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.299788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.299791] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299795] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.855 [2024-07-26 13:39:22.299803] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:24.855 [2024-07-26 13:39:22.299808] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:24.855 [2024-07-26 13:39:22.299813] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:24.855 [2024-07-26 13:39:22.299817] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:24.855 [2024-07-26 13:39:22.299821] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:24.855 [2024-07-26 13:39:22.299826] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.299838] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.299845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.299852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 
00:28:24.855 [2024-07-26 13:39:22.299860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.855 [2024-07-26 13:39:22.299872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.855 [2024-07-26 13:39:22.300082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.300089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.300092] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb240) on tqpair=0x2252470 00:28:24.855 [2024-07-26 13:39:22.300104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.300118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.855 [2024-07-26 13:39:22.300127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.300140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.855 [2024-07-26 13:39:22.300145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.300158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.855 [2024-07-26 13:39:22.300164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.300176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.855 [2024-07-26 13:39:22.300181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.300192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:24.855 [2024-07-26 13:39:22.300198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300213] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:24.855 [2024-07-26 13:39:22.300220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.855 [2024-07-26 13:39:22.300234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb240, cid 0, qid 0 00:28:24.855 [2024-07-26 13:39:22.300239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb3a0, cid 1, qid 0 00:28:24.855 [2024-07-26 13:39:22.300244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb500, cid 2, qid 0 00:28:24.855 [2024-07-26 13:39:22.300249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:24.855 [2024-07-26 13:39:22.300253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:24.855 [2024-07-26 13:39:22.300503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.855 [2024-07-26 13:39:22.300510] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.855 [2024-07-26 13:39:22.300513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.855 [2024-07-26 13:39:22.300517] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:24.856 [2024-07-26 13:39:22.300523] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:24.856 [2024-07-26 13:39:22.300528] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:24.856 [2024-07-26 13:39:22.300537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:24.856 [2024-07-26 13:39:22.300546] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:24.856 [2024-07-26 13:39:22.300553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.300559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.300562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:24.856 [2024-07-26 13:39:22.300569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.856 [2024-07-26 13:39:22.300581] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:24.856 [2024-07-26 13:39:22.300802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.856 [2024-07-26 13:39:22.300809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.856 [2024-07-26 13:39:22.300812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.300816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:24.856 [2024-07-26 13:39:22.300880] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:24.856 [2024-07-26 13:39:22.300889] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:24.856 [2024-07-26 13:39:22.300897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.300900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.300903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:24.856 [2024-07-26 13:39:22.300910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.856 [2024-07-26 13:39:22.300921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:24.856 [2024-07-26 13:39:22.301195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.856 [2024-07-26 13:39:22.305208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.856 [2024-07-26 13:39:22.305213] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.305217] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=4096, cccid=4 00:28:24.856 [2024-07-26 13:39:22.305221] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb7c0) on tqpair(0x2252470): expected_datao=0, payload_size=4096 00:28:24.856 [2024-07-26 13:39:22.305229] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.856 [2024-07-26 13:39:22.305232] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.119 [2024-07-26 13:39:22.345218] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.119 [2024-07-26 13:39:22.345222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:25.119 [2024-07-26 13:39:22.345239] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:25.119 [2024-07-26 13:39:22.345253] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:25.119 [2024-07-26 13:39:22.345261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:25.119 [2024-07-26 13:39:22.345268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:25.119 [2024-07-26 13:39:22.345282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.119 [2024-07-26 13:39:22.345295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:25.119 [2024-07-26 13:39:22.345547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.119 [2024-07-26 13:39:22.345555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.119 [2024-07-26 13:39:22.345559] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345562] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=4096, cccid=4 00:28:25.119 [2024-07-26 13:39:22.345567] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb7c0) on tqpair(0x2252470): expected_datao=0, payload_size=4096 00:28:25.119 [2024-07-26 13:39:22.345762] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.345766] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.387579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.119 [2024-07-26 13:39:22.387588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.119 [2024-07-26 13:39:22.387591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.387595] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:25.119 [2024-07-26 13:39:22.387610] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:25.119 [2024-07-26 13:39:22.387620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:25.119 [2024-07-26 13:39:22.387627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.387631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.119 [2024-07-26 13:39:22.387634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.387641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.387653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:25.120 [2024-07-26 13:39:22.387904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.120 [2024-07-26 13:39:22.387912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.120 [2024-07-26 13:39:22.387916] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.387919] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=4096, cccid=4 00:28:25.120 [2024-07-26 13:39:22.387924] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb7c0) on tqpair(0x2252470): expected_datao=0, payload_size=4096 00:28:25.120 [2024-07-26 13:39:22.387931] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.387935] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.433219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.433222] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.433235] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433243] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433271] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:25.120 [2024-07-26 13:39:22.433275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:25.120 [2024-07-26 13:39:22.433280] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:25.120 [2024-07-26 13:39:22.433294] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433297] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433301] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.433308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.433314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.433327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.120 [2024-07-26 13:39:22.433341] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:25.120 [2024-07-26 13:39:22.433347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb920, cid 5, qid 0 00:28:25.120 [2024-07-26 13:39:22.433493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.433500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.433503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.433514] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.433520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.433524] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433527] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x22bb920) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.433537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.433551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.433562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb920, cid 5, qid 0 00:28:25.120 [2024-07-26 13:39:22.433823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.433830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.433833] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb920) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.433846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.433853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.433859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.433872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb920, cid 5, qid 0 00:28:25.120 [2024-07-26 13:39:22.434125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.434131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.434134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434138] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb920) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.434147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.434161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.434171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb920, cid 5, qid 0 00:28:25.120 [2024-07-26 13:39:22.434391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.120 [2024-07-26 13:39:22.434398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.120 [2024-07-26 13:39:22.434401] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb920) on tqpair=0x2252470 00:28:25.120 [2024-07-26 13:39:22.434417] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.434431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.434438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.434451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.434458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.434471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.434477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2252470) 00:28:25.120 [2024-07-26 13:39:22.434490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.120 [2024-07-26 13:39:22.434502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb920, cid 5, qid 0 00:28:25.120 [2024-07-26 13:39:22.434507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb7c0, cid 4, qid 0 00:28:25.120 [2024-07-26 13:39:22.434512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bba80, cid 6, qid 0 00:28:25.120 [2024-07-26 13:39:22.434517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bbbe0, cid 7, qid 0 00:28:25.120 [2024-07-26 13:39:22.434787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.120 [2024-07-26 13:39:22.434794] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.120 [2024-07-26 13:39:22.434797] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434801] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=8192, cccid=5 00:28:25.120 [2024-07-26 13:39:22.434806] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb920) on tqpair(0x2252470): expected_datao=0, payload_size=8192 00:28:25.120 [2024-07-26 13:39:22.434912] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434916] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.120 [2024-07-26 13:39:22.434922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.120 [2024-07-26 13:39:22.434927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.121 [2024-07-26 13:39:22.434931] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434934] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=512, cccid=4 00:28:25.121 [2024-07-26 13:39:22.434939] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bb7c0) on tqpair(0x2252470): expected_datao=0, payload_size=512 00:28:25.121 [2024-07-26 13:39:22.434946] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434949] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.121 [2024-07-26 13:39:22.434960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.121 [2024-07-26 13:39:22.434964] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434967] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=512, cccid=6 00:28:25.121 [2024-07-26 13:39:22.434971] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bba80) on tqpair(0x2252470): expected_datao=0, payload_size=512 00:28:25.121 [2024-07-26 13:39:22.434978] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434982] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.434988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:25.121 [2024-07-26 13:39:22.434993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:25.121 [2024-07-26 13:39:22.434996] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.435000] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2252470): datao=0, datal=4096, cccid=7 00:28:25.121 [2024-07-26 13:39:22.435004] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22bbbe0) on tqpair(0x2252470): expected_datao=0, payload_size=4096 00:28:25.121 [2024-07-26 13:39:22.435197] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.435207] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.475440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.121 [2024-07-26 13:39:22.475452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.121 [2024-07-26 13:39:22.475455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.475459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb920) on tqpair=0x2252470 00:28:25.121 [2024-07-26 13:39:22.475475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.121 [2024-07-26 13:39:22.475481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.121 [2024-07-26 13:39:22.475484] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.475488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x22bb7c0) on tqpair=0x2252470 00:28:25.121 [2024-07-26 13:39:22.475497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.121 [2024-07-26 13:39:22.475503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.121 [2024-07-26 13:39:22.475508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.475512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bba80) on tqpair=0x2252470 00:28:25.121 [2024-07-26 13:39:22.475520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.121 [2024-07-26 13:39:22.475526] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.121 [2024-07-26 13:39:22.475529] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.121 [2024-07-26 13:39:22.475533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bbbe0) on tqpair=0x2252470 00:28:25.121 ===================================================== 00:28:25.121 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.121 ===================================================== 00:28:25.121 Controller Capabilities/Features 00:28:25.121 ================================ 00:28:25.121 Vendor ID: 8086 00:28:25.121 Subsystem Vendor ID: 8086 00:28:25.121 Serial Number: SPDK00000000000001 00:28:25.121 Model Number: SPDK bdev Controller 00:28:25.121 Firmware Version: 24.01.1 00:28:25.121 Recommended Arb Burst: 6 00:28:25.121 IEEE OUI Identifier: e4 d2 5c 00:28:25.121 Multi-path I/O 00:28:25.121 May have multiple subsystem ports: Yes 00:28:25.121 May have multiple controllers: Yes 00:28:25.121 Associated with SR-IOV VF: No 00:28:25.121 Max Data Transfer Size: 131072 00:28:25.121 Max Number of Namespaces: 32 00:28:25.121 Max Number of I/O Queues: 127 00:28:25.121 NVMe Specification Version (VS): 1.3 00:28:25.121 NVMe Specification Version (Identify): 1.3 00:28:25.121 Maximum Queue Entries: 128 00:28:25.121 Contiguous Queues Required: Yes 00:28:25.121 Arbitration Mechanisms Supported 00:28:25.121 Weighted Round Robin: Not Supported 00:28:25.121 Vendor Specific: Not Supported 00:28:25.121 Reset Timeout: 15000 ms 00:28:25.121 Doorbell Stride: 4 bytes 00:28:25.121 NVM Subsystem Reset: Not Supported 00:28:25.121 Command Sets Supported 00:28:25.121 NVM Command Set: Supported 00:28:25.121 Boot Partition: Not Supported 00:28:25.121 Memory Page Size Minimum: 4096 bytes 00:28:25.121 Memory Page Size Maximum: 4096 bytes 00:28:25.121 Persistent Memory Region: Not Supported 00:28:25.121 Optional Asynchronous Events Supported 00:28:25.121 Namespace Attribute Notices: Supported 00:28:25.121 Firmware Activation Notices: Not Supported 00:28:25.121 ANA Change Notices: Not Supported 00:28:25.121 PLE Aggregate Log Change Notices: Not Supported 00:28:25.121 LBA Status Info Alert Notices: Not Supported 00:28:25.121 EGE Aggregate Log Change Notices: Not Supported 00:28:25.121 Normal NVM Subsystem Shutdown event: Not Supported 00:28:25.121 Zone Descriptor Change Notices: Not Supported 00:28:25.121 Discovery Log Change Notices: Not Supported 00:28:25.121 Controller Attributes 00:28:25.121 128-bit Host Identifier: Supported 00:28:25.121 Non-Operational Permissive Mode: Not Supported 00:28:25.121 NVM Sets: Not Supported 00:28:25.121 Read Recovery Levels: Not Supported 00:28:25.121 Endurance Groups: Not Supported 00:28:25.121 Predictable Latency Mode: Not Supported 00:28:25.121 Traffic Based Keep ALive: Not Supported 00:28:25.121 
Namespace Granularity: Not Supported 00:28:25.121 SQ Associations: Not Supported 00:28:25.121 UUID List: Not Supported 00:28:25.121 Multi-Domain Subsystem: Not Supported 00:28:25.121 Fixed Capacity Management: Not Supported 00:28:25.121 Variable Capacity Management: Not Supported 00:28:25.121 Delete Endurance Group: Not Supported 00:28:25.121 Delete NVM Set: Not Supported 00:28:25.121 Extended LBA Formats Supported: Not Supported 00:28:25.121 Flexible Data Placement Supported: Not Supported 00:28:25.121 00:28:25.121 Controller Memory Buffer Support 00:28:25.121 ================================ 00:28:25.121 Supported: No 00:28:25.121 00:28:25.121 Persistent Memory Region Support 00:28:25.121 ================================ 00:28:25.121 Supported: No 00:28:25.121 00:28:25.121 Admin Command Set Attributes 00:28:25.121 ============================ 00:28:25.121 Security Send/Receive: Not Supported 00:28:25.121 Format NVM: Not Supported 00:28:25.121 Firmware Activate/Download: Not Supported 00:28:25.121 Namespace Management: Not Supported 00:28:25.121 Device Self-Test: Not Supported 00:28:25.121 Directives: Not Supported 00:28:25.121 NVMe-MI: Not Supported 00:28:25.121 Virtualization Management: Not Supported 00:28:25.121 Doorbell Buffer Config: Not Supported 00:28:25.121 Get LBA Status Capability: Not Supported 00:28:25.121 Command & Feature Lockdown Capability: Not Supported 00:28:25.121 Abort Command Limit: 4 00:28:25.121 Async Event Request Limit: 4 00:28:25.121 Number of Firmware Slots: N/A 00:28:25.121 Firmware Slot 1 Read-Only: N/A 00:28:25.121 Firmware Activation Without Reset: N/A 00:28:25.121 Multiple Update Detection Support: N/A 00:28:25.121 Firmware Update Granularity: No Information Provided 00:28:25.121 Per-Namespace SMART Log: No 00:28:25.121 Asymmetric Namespace Access Log Page: Not Supported 00:28:25.121 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:25.121 Command Effects Log Page: Supported 00:28:25.121 Get Log Page Extended Data: Supported 00:28:25.121 Telemetry Log Pages: Not Supported 00:28:25.121 Persistent Event Log Pages: Not Supported 00:28:25.121 Supported Log Pages Log Page: May Support 00:28:25.121 Commands Supported & Effects Log Page: Not Supported 00:28:25.121 Feature Identifiers & Effects Log Page:May Support 00:28:25.121 NVMe-MI Commands & Effects Log Page: May Support 00:28:25.121 Data Area 4 for Telemetry Log: Not Supported 00:28:25.121 Error Log Page Entries Supported: 128 00:28:25.121 Keep Alive: Supported 00:28:25.121 Keep Alive Granularity: 10000 ms 00:28:25.121 00:28:25.121 NVM Command Set Attributes 00:28:25.121 ========================== 00:28:25.121 Submission Queue Entry Size 00:28:25.121 Max: 64 00:28:25.121 Min: 64 00:28:25.121 Completion Queue Entry Size 00:28:25.121 Max: 16 00:28:25.121 Min: 16 00:28:25.121 Number of Namespaces: 32 00:28:25.121 Compare Command: Supported 00:28:25.121 Write Uncorrectable Command: Not Supported 00:28:25.121 Dataset Management Command: Supported 00:28:25.121 Write Zeroes Command: Supported 00:28:25.121 Set Features Save Field: Not Supported 00:28:25.121 Reservations: Supported 00:28:25.121 Timestamp: Not Supported 00:28:25.122 Copy: Supported 00:28:25.122 Volatile Write Cache: Present 00:28:25.122 Atomic Write Unit (Normal): 1 00:28:25.122 Atomic Write Unit (PFail): 1 00:28:25.122 Atomic Compare & Write Unit: 1 00:28:25.122 Fused Compare & Write: Supported 00:28:25.122 Scatter-Gather List 00:28:25.122 SGL Command Set: Supported 00:28:25.122 SGL Keyed: Supported 00:28:25.122 SGL Bit Bucket Descriptor: Not Supported 
00:28:25.122 SGL Metadata Pointer: Not Supported 00:28:25.122 Oversized SGL: Not Supported 00:28:25.122 SGL Metadata Address: Not Supported 00:28:25.122 SGL Offset: Supported 00:28:25.122 Transport SGL Data Block: Not Supported 00:28:25.122 Replay Protected Memory Block: Not Supported 00:28:25.122 00:28:25.122 Firmware Slot Information 00:28:25.122 ========================= 00:28:25.122 Active slot: 1 00:28:25.122 Slot 1 Firmware Revision: 24.01.1 00:28:25.122 00:28:25.122 00:28:25.122 Commands Supported and Effects 00:28:25.122 ============================== 00:28:25.122 Admin Commands 00:28:25.122 -------------- 00:28:25.122 Get Log Page (02h): Supported 00:28:25.122 Identify (06h): Supported 00:28:25.122 Abort (08h): Supported 00:28:25.122 Set Features (09h): Supported 00:28:25.122 Get Features (0Ah): Supported 00:28:25.122 Asynchronous Event Request (0Ch): Supported 00:28:25.122 Keep Alive (18h): Supported 00:28:25.122 I/O Commands 00:28:25.122 ------------ 00:28:25.122 Flush (00h): Supported LBA-Change 00:28:25.122 Write (01h): Supported LBA-Change 00:28:25.122 Read (02h): Supported 00:28:25.122 Compare (05h): Supported 00:28:25.122 Write Zeroes (08h): Supported LBA-Change 00:28:25.122 Dataset Management (09h): Supported LBA-Change 00:28:25.122 Copy (19h): Supported LBA-Change 00:28:25.122 Unknown (79h): Supported LBA-Change 00:28:25.122 Unknown (7Ah): Supported 00:28:25.122 00:28:25.122 Error Log 00:28:25.122 ========= 00:28:25.122 00:28:25.122 Arbitration 00:28:25.122 =========== 00:28:25.122 Arbitration Burst: 1 00:28:25.122 00:28:25.122 Power Management 00:28:25.122 ================ 00:28:25.122 Number of Power States: 1 00:28:25.122 Current Power State: Power State #0 00:28:25.122 Power State #0: 00:28:25.122 Max Power: 0.00 W 00:28:25.122 Non-Operational State: Operational 00:28:25.122 Entry Latency: Not Reported 00:28:25.122 Exit Latency: Not Reported 00:28:25.122 Relative Read Throughput: 0 00:28:25.122 Relative Read Latency: 0 00:28:25.122 Relative Write Throughput: 0 00:28:25.122 Relative Write Latency: 0 00:28:25.122 Idle Power: Not Reported 00:28:25.122 Active Power: Not Reported 00:28:25.122 Non-Operational Permissive Mode: Not Supported 00:28:25.122 00:28:25.122 Health Information 00:28:25.122 ================== 00:28:25.122 Critical Warnings: 00:28:25.122 Available Spare Space: OK 00:28:25.122 Temperature: OK 00:28:25.122 Device Reliability: OK 00:28:25.122 Read Only: No 00:28:25.122 Volatile Memory Backup: OK 00:28:25.122 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:25.122 Temperature Threshold: [2024-07-26 13:39:22.475636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.475641] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.475645] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.475652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.475665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bbbe0, cid 7, qid 0 00:28:25.122 [2024-07-26 13:39:22.475941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.475948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.475951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
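The controller report interleaved with the debug entries in this stretch is the kind of output SPDK's identify example prints once the admin queue is up. A minimal sketch of such an invocation follows; the binary path and the exact transport ID string are assumptions for illustration, not values taken from this log:

  # hedged sketch: dump the controller/namespace report for the listener used in this run
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'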
00:28:25.122 [2024-07-26 13:39:22.475955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bbbe0) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.475984] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:25.122 [2024-07-26 13:39:22.475995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.122 [2024-07-26 13:39:22.476002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.122 [2024-07-26 13:39:22.476008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.122 [2024-07-26 13:39:22.476014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.122 [2024-07-26 13:39:22.476022] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.476036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.476048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:25.122 [2024-07-26 13:39:22.476291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.476298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.476301] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476305] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb660) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.476312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.476326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.476340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:25.122 [2024-07-26 13:39:22.476607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.476616] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.476619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb660) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.476628] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:25.122 [2024-07-26 13:39:22.476633] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:25.122 [2024-07-26 
13:39:22.476642] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.476656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.476667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:25.122 [2024-07-26 13:39:22.476920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.476926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.476929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb660) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.476944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.476951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.476958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.476968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:25.122 [2024-07-26 13:39:22.481208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.481219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.481223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.481227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb660) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.481238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.481242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.481246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2252470) 00:28:25.122 [2024-07-26 13:39:22.481252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.122 [2024-07-26 13:39:22.481265] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22bb660, cid 3, qid 0 00:28:25.122 [2024-07-26 13:39:22.481488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:25.122 [2024-07-26 13:39:22.481494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:25.122 [2024-07-26 13:39:22.481498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:25.122 [2024-07-26 13:39:22.481501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22bb660) on tqpair=0x2252470 00:28:25.122 [2024-07-26 13:39:22.481509] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:25.123 0 Kelvin (-273 Celsius) 00:28:25.123 
Available Spare: 0% 00:28:25.123 Available Spare Threshold: 0% 00:28:25.123 Life Percentage Used: 0% 00:28:25.123 Data Units Read: 0 00:28:25.123 Data Units Written: 0 00:28:25.123 Host Read Commands: 0 00:28:25.123 Host Write Commands: 0 00:28:25.123 Controller Busy Time: 0 minutes 00:28:25.123 Power Cycles: 0 00:28:25.123 Power On Hours: 0 hours 00:28:25.123 Unsafe Shutdowns: 0 00:28:25.123 Unrecoverable Media Errors: 0 00:28:25.123 Lifetime Error Log Entries: 0 00:28:25.123 Warning Temperature Time: 0 minutes 00:28:25.123 Critical Temperature Time: 0 minutes 00:28:25.123 00:28:25.123 Number of Queues 00:28:25.123 ================ 00:28:25.123 Number of I/O Submission Queues: 127 00:28:25.123 Number of I/O Completion Queues: 127 00:28:25.123 00:28:25.123 Active Namespaces 00:28:25.123 ================= 00:28:25.123 Namespace ID:1 00:28:25.123 Error Recovery Timeout: Unlimited 00:28:25.123 Command Set Identifier: NVM (00h) 00:28:25.123 Deallocate: Supported 00:28:25.123 Deallocated/Unwritten Error: Not Supported 00:28:25.123 Deallocated Read Value: Unknown 00:28:25.123 Deallocate in Write Zeroes: Not Supported 00:28:25.123 Deallocated Guard Field: 0xFFFF 00:28:25.123 Flush: Supported 00:28:25.123 Reservation: Supported 00:28:25.123 Namespace Sharing Capabilities: Multiple Controllers 00:28:25.123 Size (in LBAs): 131072 (0GiB) 00:28:25.123 Capacity (in LBAs): 131072 (0GiB) 00:28:25.123 Utilization (in LBAs): 131072 (0GiB) 00:28:25.123 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:25.123 EUI64: ABCDEF0123456789 00:28:25.123 UUID: f821392e-4181-4a40-aef5-9c43cb5d5e38 00:28:25.123 Thin Provisioning: Not Supported 00:28:25.123 Per-NS Atomic Units: Yes 00:28:25.123 Atomic Boundary Size (Normal): 0 00:28:25.123 Atomic Boundary Size (PFail): 0 00:28:25.123 Atomic Boundary Offset: 0 00:28:25.123 Maximum Single Source Range Length: 65535 00:28:25.123 Maximum Copy Length: 65535 00:28:25.123 Maximum Source Range Count: 1 00:28:25.123 NGUID/EUI64 Never Reused: No 00:28:25.123 Namespace Write Protected: No 00:28:25.123 Number of LBA Formats: 1 00:28:25.123 Current LBA Format: LBA Format #00 00:28:25.123 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:25.123 00:28:25.123 13:39:22 -- host/identify.sh@51 -- # sync 00:28:25.123 13:39:22 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.123 13:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:25.123 13:39:22 -- common/autotest_common.sh@10 -- # set +x 00:28:25.123 13:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:25.123 13:39:22 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:25.123 13:39:22 -- host/identify.sh@56 -- # nvmftestfini 00:28:25.123 13:39:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:25.123 13:39:22 -- nvmf/common.sh@116 -- # sync 00:28:25.123 13:39:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:25.123 13:39:22 -- nvmf/common.sh@119 -- # set +e 00:28:25.123 13:39:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:25.123 13:39:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:25.123 rmmod nvme_tcp 00:28:25.123 rmmod nvme_fabrics 00:28:25.123 rmmod nvme_keyring 00:28:25.123 13:39:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:25.123 13:39:22 -- nvmf/common.sh@123 -- # set -e 00:28:25.123 13:39:22 -- nvmf/common.sh@124 -- # return 0 00:28:25.123 13:39:22 -- nvmf/common.sh@477 -- # '[' -n 1113321 ']' 00:28:25.123 13:39:22 -- nvmf/common.sh@478 -- # killprocess 1113321 00:28:25.123 13:39:22 -- 
common/autotest_common.sh@926 -- # '[' -z 1113321 ']' 00:28:25.123 13:39:22 -- common/autotest_common.sh@930 -- # kill -0 1113321 00:28:25.123 13:39:22 -- common/autotest_common.sh@931 -- # uname 00:28:25.123 13:39:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:25.123 13:39:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1113321 00:28:25.384 13:39:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:25.384 13:39:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:25.384 13:39:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1113321' 00:28:25.384 killing process with pid 1113321 00:28:25.384 13:39:22 -- common/autotest_common.sh@945 -- # kill 1113321 00:28:25.384 [2024-07-26 13:39:22.637216] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:25.385 13:39:22 -- common/autotest_common.sh@950 -- # wait 1113321 00:28:25.385 13:39:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:25.385 13:39:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:25.385 13:39:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:25.385 13:39:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.385 13:39:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:25.385 13:39:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.385 13:39:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.385 13:39:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.933 13:39:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:27.933 00:28:27.933 real 0m11.073s 00:28:27.933 user 0m8.220s 00:28:27.933 sys 0m5.748s 00:28:27.933 13:39:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.933 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 ************************************ 00:28:27.933 END TEST nvmf_identify 00:28:27.933 ************************************ 00:28:27.933 13:39:24 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:27.933 13:39:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.933 13:39:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.933 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 ************************************ 00:28:27.933 START TEST nvmf_perf 00:28:27.933 ************************************ 00:28:27.934 13:39:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:27.934 * Looking for test storage... 
00:28:27.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.934 13:39:24 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.934 13:39:24 -- nvmf/common.sh@7 -- # uname -s 00:28:27.934 13:39:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.934 13:39:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.934 13:39:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.934 13:39:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.934 13:39:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.934 13:39:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.934 13:39:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.934 13:39:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.934 13:39:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.934 13:39:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.934 13:39:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.934 13:39:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.934 13:39:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.934 13:39:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.934 13:39:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.934 13:39:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.934 13:39:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.934 13:39:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.934 13:39:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.934 13:39:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.934 13:39:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.934 13:39:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.934 13:39:25 -- paths/export.sh@5 -- # export PATH 00:28:27.934 13:39:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.934 13:39:25 -- nvmf/common.sh@46 -- # : 0 00:28:27.934 13:39:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.934 13:39:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.934 13:39:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.934 13:39:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.934 13:39:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.934 13:39:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.934 13:39:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.934 13:39:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.934 13:39:25 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:27.934 13:39:25 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:27.934 13:39:25 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:27.934 13:39:25 -- host/perf.sh@17 -- # nvmftestinit 00:28:27.934 13:39:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:27.934 13:39:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.934 13:39:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:27.934 13:39:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:27.934 13:39:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:27.934 13:39:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.934 13:39:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.934 13:39:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.934 13:39:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:27.934 13:39:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:27.934 13:39:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:27.934 13:39:25 -- common/autotest_common.sh@10 -- # set +x 00:28:34.529 13:39:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:34.529 13:39:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:34.529 13:39:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:34.529 13:39:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:34.529 13:39:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:34.529 13:39:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:34.529 13:39:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:34.529 13:39:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:28:34.529 13:39:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:34.529 13:39:31 -- nvmf/common.sh@295 -- # e810=() 00:28:34.529 13:39:31 -- nvmf/common.sh@295 -- # local -ga e810 00:28:34.529 13:39:31 -- nvmf/common.sh@296 -- # x722=() 00:28:34.529 13:39:31 -- nvmf/common.sh@296 -- # local -ga x722 00:28:34.529 13:39:31 -- nvmf/common.sh@297 -- # mlx=() 00:28:34.529 13:39:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:34.529 13:39:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.529 13:39:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:34.529 13:39:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:34.529 13:39:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:34.529 13:39:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:34.529 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:34.529 13:39:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:34.529 13:39:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:34.529 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:34.529 13:39:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:34.529 13:39:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.529 13:39:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
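Both ports matched above report vendor 0x8086, device 0x159b, i.e. Intel E810 functions bound to the ice driver, so they land in the e810 bucket and from there in pci_devs. Outside the harness the same classification can be reproduced roughly as follows (assuming lspci is available; the BDF 0000:4b:00.0 is taken from the trace above):

    # List E810 functions by vendor:device ID (0x8086:0x159b), with PCI domain shown.
    lspci -D -d 8086:159b
    # The kernel net device behind a given PCI function lives under sysfs,
    # which is exactly the glob the script expands into pci_net_devs:
    ls /sys/bus/pci/devices/0000:4b:00.0/net/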
00:28:34.529 13:39:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:34.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:34.529 13:39:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.529 13:39:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:34.529 13:39:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.529 13:39:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.529 13:39:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:34.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:34.529 13:39:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.529 13:39:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:34.529 13:39:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:34.529 13:39:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:34.529 13:39:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.529 13:39:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.529 13:39:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.529 13:39:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:34.529 13:39:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.529 13:39:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.529 13:39:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:34.529 13:39:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.529 13:39:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.529 13:39:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:34.529 13:39:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:34.529 13:39:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.529 13:39:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.529 13:39:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.529 13:39:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.529 13:39:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:34.529 13:39:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.791 13:39:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.791 13:39:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.791 13:39:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:34.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:28:34.791 00:28:34.791 --- 10.0.0.2 ping statistics --- 00:28:34.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.791 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:28:34.791 13:39:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:34.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:28:34.791 00:28:34.791 --- 10.0.0.1 ping statistics --- 00:28:34.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.791 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:28:34.791 13:39:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.791 13:39:32 -- nvmf/common.sh@410 -- # return 0 00:28:34.791 13:39:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:34.791 13:39:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.791 13:39:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:34.791 13:39:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:34.791 13:39:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.791 13:39:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:34.791 13:39:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:34.791 13:39:32 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:34.791 13:39:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:34.791 13:39:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:34.791 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.791 13:39:32 -- nvmf/common.sh@469 -- # nvmfpid=1117796 00:28:34.791 13:39:32 -- nvmf/common.sh@470 -- # waitforlisten 1117796 00:28:34.791 13:39:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:34.791 13:39:32 -- common/autotest_common.sh@819 -- # '[' -z 1117796 ']' 00:28:34.791 13:39:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.791 13:39:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:34.791 13:39:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.791 13:39:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:34.791 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:28:34.791 [2024-07-26 13:39:32.217065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:34.791 [2024-07-26 13:39:32.217132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.791 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.052 [2024-07-26 13:39:32.289140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.052 [2024-07-26 13:39:32.327604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.052 [2024-07-26 13:39:32.327754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.052 [2024-07-26 13:39:32.327766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.052 [2024-07-26 13:39:32.327775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:35.052 [2024-07-26 13:39:32.327929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.052 [2024-07-26 13:39:32.328047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.052 [2024-07-26 13:39:32.328212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.052 [2024-07-26 13:39:32.328227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.623 13:39:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:35.623 13:39:32 -- common/autotest_common.sh@852 -- # return 0 00:28:35.623 13:39:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:35.623 13:39:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:35.623 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:28:35.623 13:39:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.623 13:39:33 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:35.623 13:39:33 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:36.195 13:39:33 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:36.195 13:39:33 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:36.455 13:39:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:36.455 13:39:33 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:36.455 13:39:33 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:36.455 13:39:33 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:36.455 13:39:33 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:36.455 13:39:33 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:36.455 13:39:33 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:36.716 [2024-07-26 13:39:33.996583] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.716 13:39:34 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.978 13:39:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:36.978 13:39:34 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.978 13:39:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:36.978 13:39:34 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:37.239 13:39:34 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.239 [2024-07-26 13:39:34.659187] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.239 13:39:34 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:37.501 13:39:34 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:28:37.501 13:39:34 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:37.501 13:39:34 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
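At this point the target side is fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 backed by the 64 MiB Malloc bdev plus the local NVMe namespace, and listeners on 10.0.0.2:4420. Collapsed into one place, the RPC sequence the trace just stepped through looks roughly like this; the workspace prefix is shortened to $RPC, and the arguments are copied from the log.

    RPC=/path/to/spdk/scripts/rpc.py   # shortened; the log uses the full workspace path
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_malloc_create 64 512     # creates Malloc0 (64 MiB, 512 B blocks)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420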
00:28:37.501 13:39:34 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:28:38.887 Initializing NVMe Controllers 00:28:38.887 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:28:38.887 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:28:38.887 Initialization complete. Launching workers. 00:28:38.887 ======================================================== 00:28:38.887 Latency(us) 00:28:38.887 Device Information : IOPS MiB/s Average min max 00:28:38.887 PCIE (0000:65:00.0) NSID 1 from core 0: 80985.42 316.35 394.34 13.10 4628.15 00:28:38.887 ======================================================== 00:28:38.887 Total : 80985.42 316.35 394.34 13.10 4628.15 00:28:38.887 00:28:38.887 13:39:36 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:38.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.269 Initializing NVMe Controllers 00:28:40.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:40.269 Initialization complete. Launching workers. 00:28:40.269 ======================================================== 00:28:40.269 Latency(us) 00:28:40.269 Device Information : IOPS MiB/s Average min max 00:28:40.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.94 0.36 11223.26 413.23 44555.59 00:28:40.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.96 0.23 17075.97 7955.74 51879.11 00:28:40.269 ======================================================== 00:28:40.269 Total : 151.90 0.59 13533.54 413.23 51879.11 00:28:40.269 00:28:40.269 13:39:37 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.269 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.270 Initializing NVMe Controllers 00:28:41.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.270 Initialization complete. Launching workers. 
00:28:41.270 ======================================================== 00:28:41.270 Latency(us) 00:28:41.270 Device Information : IOPS MiB/s Average min max 00:28:41.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7837.99 30.62 4095.72 704.75 8530.73 00:28:41.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3917.00 15.30 8208.15 6781.71 16079.61 00:28:41.270 ======================================================== 00:28:41.270 Total : 11754.99 45.92 5466.07 704.75 16079.61 00:28:41.270 00:28:41.270 13:39:38 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:41.270 13:39:38 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:41.270 13:39:38 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.270 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.813 Initializing NVMe Controllers 00:28:43.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.813 Controller IO queue size 128, less than required. 00:28:43.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.813 Controller IO queue size 128, less than required. 00:28:43.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:43.813 Initialization complete. Launching workers. 00:28:43.813 ======================================================== 00:28:43.813 Latency(us) 00:28:43.813 Device Information : IOPS MiB/s Average min max 00:28:43.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 818.88 204.72 162715.91 95445.72 298546.99 00:28:43.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.07 140.27 245966.75 63331.21 398713.50 00:28:43.813 ======================================================== 00:28:43.813 Total : 1379.95 344.99 196564.82 63331.21 398713.50 00:28:43.813 00:28:43.813 13:39:41 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:43.813 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.072 No valid NVMe controllers or AIO or URING devices found 00:28:44.072 Initializing NVMe Controllers 00:28:44.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.072 Controller IO queue size 128, less than required. 00:28:44.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.072 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:44.072 Controller IO queue size 128, less than required. 00:28:44.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.072 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:44.072 WARNING: Some requested NVMe devices were skipped 00:28:44.072 13:39:41 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:44.072 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.609 Initializing NVMe Controllers 00:28:46.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.609 Controller IO queue size 128, less than required. 00:28:46.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.609 Controller IO queue size 128, less than required. 00:28:46.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:46.609 Initialization complete. Launching workers. 00:28:46.609 00:28:46.609 ==================== 00:28:46.609 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:46.609 TCP transport: 00:28:46.609 polls: 43903 00:28:46.609 idle_polls: 15562 00:28:46.609 sock_completions: 28341 00:28:46.609 nvme_completions: 3430 00:28:46.609 submitted_requests: 5230 00:28:46.609 queued_requests: 1 00:28:46.609 00:28:46.609 ==================== 00:28:46.609 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:46.609 TCP transport: 00:28:46.609 polls: 44063 00:28:46.609 idle_polls: 15375 00:28:46.609 sock_completions: 28688 00:28:46.609 nvme_completions: 3568 00:28:46.609 submitted_requests: 5490 00:28:46.609 queued_requests: 1 00:28:46.609 ======================================================== 00:28:46.609 Latency(us) 00:28:46.609 Device Information : IOPS MiB/s Average min max 00:28:46.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 921.00 230.25 142523.18 63261.29 272782.49 00:28:46.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 955.00 238.75 136590.86 68808.70 198353.71 00:28:46.609 ======================================================== 00:28:46.610 Total : 1875.99 469.00 139503.26 63261.29 272782.49 00:28:46.610 00:28:46.610 13:39:43 -- host/perf.sh@66 -- # sync 00:28:46.610 13:39:43 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.610 13:39:44 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:46.610 13:39:44 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:28:46.610 13:39:44 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:47.995 13:39:45 -- host/perf.sh@72 -- # ls_guid=ea25cdcc-293c-428b-ad0b-530822df6de1 00:28:47.995 13:39:45 -- host/perf.sh@73 -- # get_lvs_free_mb ea25cdcc-293c-428b-ad0b-530822df6de1 00:28:47.995 13:39:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ea25cdcc-293c-428b-ad0b-530822df6de1 00:28:47.995 13:39:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:47.995 13:39:45 -- common/autotest_common.sh@1345 -- # local fc 00:28:47.995 13:39:45 -- common/autotest_common.sh@1346 -- # local cs 00:28:47.995 13:39:45 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:47.995 13:39:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:47.995 { 00:28:47.995 "uuid": "ea25cdcc-293c-428b-ad0b-530822df6de1", 00:28:47.995 "name": "lvs_0", 00:28:47.995 "base_bdev": "Nvme0n1", 00:28:47.995 "total_data_clusters": 457407, 00:28:47.995 "free_clusters": 457407, 00:28:47.995 "block_size": 512, 00:28:47.995 "cluster_size": 4194304 00:28:47.995 } 00:28:47.995 ]' 00:28:47.995 13:39:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ea25cdcc-293c-428b-ad0b-530822df6de1") .free_clusters' 00:28:47.995 13:39:45 -- common/autotest_common.sh@1348 -- # fc=457407 00:28:47.995 13:39:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ea25cdcc-293c-428b-ad0b-530822df6de1") .cluster_size' 00:28:47.995 13:39:45 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:47.995 13:39:45 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:28:47.995 13:39:45 -- common/autotest_common.sh@1353 -- # echo 1829628 00:28:47.995 1829628 00:28:47.995 13:39:45 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:28:47.995 13:39:45 -- host/perf.sh@78 -- # free_mb=20480 00:28:47.995 13:39:45 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea25cdcc-293c-428b-ad0b-530822df6de1 lbd_0 20480 00:28:48.255 13:39:45 -- host/perf.sh@80 -- # lb_guid=829c806f-9746-4fb5-b124-5844241cf3c1 00:28:48.255 13:39:45 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 829c806f-9746-4fb5-b124-5844241cf3c1 lvs_n_0 00:28:50.170 13:39:47 -- host/perf.sh@83 -- # ls_nested_guid=d0c78e23-fa4a-4882-beed-d513c4e1f27c 00:28:50.170 13:39:47 -- host/perf.sh@84 -- # get_lvs_free_mb d0c78e23-fa4a-4882-beed-d513c4e1f27c 00:28:50.170 13:39:47 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d0c78e23-fa4a-4882-beed-d513c4e1f27c 00:28:50.170 13:39:47 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:50.170 13:39:47 -- common/autotest_common.sh@1345 -- # local fc 00:28:50.170 13:39:47 -- common/autotest_common.sh@1346 -- # local cs 00:28:50.170 13:39:47 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:50.170 13:39:47 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:50.170 { 00:28:50.170 "uuid": "ea25cdcc-293c-428b-ad0b-530822df6de1", 00:28:50.170 "name": "lvs_0", 00:28:50.170 "base_bdev": "Nvme0n1", 00:28:50.170 "total_data_clusters": 457407, 00:28:50.170 "free_clusters": 452287, 00:28:50.170 "block_size": 512, 00:28:50.170 "cluster_size": 4194304 00:28:50.170 }, 00:28:50.170 { 00:28:50.170 "uuid": "d0c78e23-fa4a-4882-beed-d513c4e1f27c", 00:28:50.170 "name": "lvs_n_0", 00:28:50.170 "base_bdev": "829c806f-9746-4fb5-b124-5844241cf3c1", 00:28:50.170 "total_data_clusters": 5114, 00:28:50.170 "free_clusters": 5114, 00:28:50.170 "block_size": 512, 00:28:50.170 "cluster_size": 4194304 00:28:50.170 } 00:28:50.170 ]' 00:28:50.170 13:39:47 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d0c78e23-fa4a-4882-beed-d513c4e1f27c") .free_clusters' 00:28:50.170 13:39:47 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:50.170 13:39:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d0c78e23-fa4a-4882-beed-d513c4e1f27c") .cluster_size' 00:28:50.170 13:39:47 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:50.170 13:39:47 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:28:50.170 13:39:47 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:50.170 20456 00:28:50.170 13:39:47 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:50.170 13:39:47 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0c78e23-fa4a-4882-beed-d513c4e1f27c lbd_nest_0 20456 00:28:50.432 13:39:47 -- host/perf.sh@88 -- # lb_nested_guid=948c0f89-4abc-40e1-bae7-6bea189a5154 00:28:50.432 13:39:47 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.432 13:39:47 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:50.432 13:39:47 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 948c0f89-4abc-40e1-bae7-6bea189a5154 00:28:50.693 13:39:47 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.693 13:39:48 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:50.693 13:39:48 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:50.693 13:39:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:50.693 13:39:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:50.693 13:39:48 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.953 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.187 Initializing NVMe Controllers 00:29:03.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.187 Initialization complete. Launching workers. 00:29:03.187 ======================================================== 00:29:03.187 Latency(us) 00:29:03.187 Device Information : IOPS MiB/s Average min max 00:29:03.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.40 0.02 22079.97 283.18 48405.74 00:29:03.187 ======================================================== 00:29:03.187 Total : 45.40 0.02 22079.97 283.18 48405.74 00:29:03.187 00:29:03.187 13:39:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:03.187 13:39:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.187 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.189 Initializing NVMe Controllers 00:29:13.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.189 Initialization complete. Launching workers. 
00:29:13.189 ======================================================== 00:29:13.189 Latency(us) 00:29:13.189 Device Information : IOPS MiB/s Average min max 00:29:13.189 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.00 10.12 12350.88 4986.04 47887.69 00:29:13.189 ======================================================== 00:29:13.189 Total : 81.00 10.12 12350.88 4986.04 47887.69 00:29:13.189 00:29:13.189 13:40:08 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:13.189 13:40:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:13.189 13:40:08 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.189 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.227 Initializing NVMe Controllers 00:29:23.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.227 Initialization complete. Launching workers. 00:29:23.227 ======================================================== 00:29:23.227 Latency(us) 00:29:23.227 Device Information : IOPS MiB/s Average min max 00:29:23.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8429.57 4.12 3801.57 454.56 43355.75 00:29:23.227 ======================================================== 00:29:23.227 Total : 8429.57 4.12 3801.57 454.56 43355.75 00:29:23.227 00:29:23.227 13:40:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.227 13:40:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.227 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.236 Initializing NVMe Controllers 00:29:33.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.236 Initialization complete. Launching workers. 00:29:33.236 ======================================================== 00:29:33.236 Latency(us) 00:29:33.236 Device Information : IOPS MiB/s Average min max 00:29:33.236 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1621.09 202.64 19786.02 1186.36 65817.22 00:29:33.236 ======================================================== 00:29:33.236 Total : 1621.09 202.64 19786.02 1186.36 65817.22 00:29:33.236 00:29:33.236 13:40:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:33.236 13:40:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:33.236 13:40:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.236 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.244 Initializing NVMe Controllers 00:29:43.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.244 Controller IO queue size 128, less than required. 00:29:43.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.244 Initialization complete. Launching workers. 
00:29:43.244 ======================================================== 00:29:43.244 Latency(us) 00:29:43.244 Device Information : IOPS MiB/s Average min max 00:29:43.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15905.94 7.77 8047.30 1950.95 19213.09 00:29:43.244 ======================================================== 00:29:43.244 Total : 15905.94 7.77 8047.30 1950.95 19213.09 00:29:43.244 00:29:43.244 13:40:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:43.244 13:40:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.244 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.244 Initializing NVMe Controllers 00:29:53.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.244 Controller IO queue size 128, less than required. 00:29:53.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:53.244 Initialization complete. Launching workers. 00:29:53.244 ======================================================== 00:29:53.244 Latency(us) 00:29:53.244 Device Information : IOPS MiB/s Average min max 00:29:53.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1126.21 140.78 114051.05 15906.88 265832.63 00:29:53.244 ======================================================== 00:29:53.244 Total : 1126.21 140.78 114051.05 15906.88 265832.63 00:29:53.244 00:29:53.244 13:40:50 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.244 13:40:50 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 948c0f89-4abc-40e1-bae7-6bea189a5154 00:29:55.157 13:40:52 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:55.157 13:40:52 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 829c806f-9746-4fb5-b124-5844241cf3c1 00:29:55.157 13:40:52 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:55.418 13:40:52 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:55.418 13:40:52 -- host/perf.sh@114 -- # nvmftestfini 00:29:55.418 13:40:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:55.418 13:40:52 -- nvmf/common.sh@116 -- # sync 00:29:55.418 13:40:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:55.418 13:40:52 -- nvmf/common.sh@119 -- # set +e 00:29:55.418 13:40:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:55.418 13:40:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:55.418 rmmod nvme_tcp 00:29:55.418 rmmod nvme_fabrics 00:29:55.418 rmmod nvme_keyring 00:29:55.418 13:40:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:55.418 13:40:52 -- nvmf/common.sh@123 -- # set -e 00:29:55.418 13:40:52 -- nvmf/common.sh@124 -- # return 0 00:29:55.418 13:40:52 -- nvmf/common.sh@477 -- # '[' -n 1117796 ']' 00:29:55.418 13:40:52 -- nvmf/common.sh@478 -- # killprocess 1117796 00:29:55.418 13:40:52 -- common/autotest_common.sh@926 -- # '[' -z 1117796 ']' 00:29:55.418 13:40:52 -- common/autotest_common.sh@930 -- # kill 
-0 1117796 00:29:55.418 13:40:52 -- common/autotest_common.sh@931 -- # uname 00:29:55.418 13:40:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.418 13:40:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1117796 00:29:55.688 13:40:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:55.688 13:40:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:55.688 13:40:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1117796' 00:29:55.688 killing process with pid 1117796 00:29:55.688 13:40:52 -- common/autotest_common.sh@945 -- # kill 1117796 00:29:55.688 13:40:52 -- common/autotest_common.sh@950 -- # wait 1117796 00:29:57.637 13:40:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:57.637 13:40:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:57.637 13:40:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:57.637 13:40:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.637 13:40:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:57.637 13:40:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.637 13:40:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.637 13:40:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.553 13:40:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:59.553 00:29:59.553 real 1m32.047s 00:29:59.553 user 5m26.826s 00:29:59.553 sys 0m13.141s 00:29:59.553 13:40:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.553 13:40:56 -- common/autotest_common.sh@10 -- # set +x 00:29:59.553 ************************************ 00:29:59.553 END TEST nvmf_perf 00:29:59.553 ************************************ 00:29:59.553 13:40:56 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:59.553 13:40:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:59.553 13:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.553 13:40:56 -- common/autotest_common.sh@10 -- # set +x 00:29:59.553 ************************************ 00:29:59.553 START TEST nvmf_fio_host 00:29:59.553 ************************************ 00:29:59.553 13:40:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:59.815 * Looking for test storage... 
00:29:59.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.815 13:40:57 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.815 13:40:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.815 13:40:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.815 13:40:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.815 13:40:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@5 -- # export PATH 00:29:59.815 13:40:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.815 13:40:57 -- nvmf/common.sh@7 -- # uname -s 00:29:59.815 13:40:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.815 13:40:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.815 13:40:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.815 13:40:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.815 13:40:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.815 13:40:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.815 13:40:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.815 13:40:57 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.815 13:40:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.815 13:40:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.815 13:40:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.815 13:40:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.815 13:40:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.815 13:40:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.815 13:40:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.815 13:40:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.815 13:40:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.815 13:40:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.815 13:40:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.815 13:40:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- paths/export.sh@5 -- # export PATH 00:29:59.815 13:40:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.815 13:40:57 -- nvmf/common.sh@46 -- # : 0 00:29:59.815 13:40:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:59.815 13:40:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:59.815 13:40:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:59.815 13:40:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.815 13:40:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.815 13:40:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:59.815 13:40:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:59.816 13:40:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:59.816 13:40:57 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:59.816 13:40:57 -- host/fio.sh@14 -- # nvmftestinit 00:29:59.816 13:40:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:59.816 13:40:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.816 13:40:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:59.816 13:40:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:59.816 13:40:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:59.816 13:40:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.816 13:40:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.816 13:40:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.816 13:40:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:59.816 13:40:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:59.816 13:40:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:59.816 13:40:57 -- common/autotest_common.sh@10 -- # set +x 00:30:07.960 13:41:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:07.960 13:41:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:07.960 13:41:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:07.960 13:41:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:07.960 13:41:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:07.960 13:41:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:07.960 13:41:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:07.960 13:41:04 -- nvmf/common.sh@294 -- # net_devs=() 00:30:07.960 13:41:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:07.960 13:41:04 -- nvmf/common.sh@295 -- # e810=() 00:30:07.960 13:41:04 -- nvmf/common.sh@295 -- # local -ga e810 00:30:07.960 13:41:04 -- nvmf/common.sh@296 -- # x722=() 00:30:07.960 13:41:04 -- nvmf/common.sh@296 -- # local -ga x722 00:30:07.960 13:41:04 -- nvmf/common.sh@297 -- # mlx=() 00:30:07.960 13:41:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:07.960 13:41:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.960 13:41:04 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.960 13:41:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:07.960 13:41:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:07.960 13:41:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:07.960 13:41:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:07.960 13:41:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.960 13:41:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.960 13:41:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:07.961 13:41:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.961 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.961 13:41:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:07.961 13:41:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:07.961 13:41:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.961 13:41:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:07.961 13:41:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.961 13:41:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.961 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:07.961 13:41:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.961 13:41:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:07.961 13:41:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.961 13:41:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:07.961 13:41:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.961 13:41:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.961 Found net devices under 0000:4b:00.1: cvl_0_1 
00:30:07.961 13:41:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.961 13:41:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:07.961 13:41:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:07.961 13:41:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:07.961 13:41:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.961 13:41:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.961 13:41:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.961 13:41:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:07.961 13:41:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.961 13:41:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.961 13:41:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:07.961 13:41:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.961 13:41:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.961 13:41:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:07.961 13:41:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:07.961 13:41:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.961 13:41:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.961 13:41:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.961 13:41:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.961 13:41:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:07.961 13:41:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.961 13:41:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.961 13:41:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.961 13:41:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:07.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.744 ms 00:30:07.961 00:30:07.961 --- 10.0.0.2 ping statistics --- 00:30:07.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.961 rtt min/avg/max/mdev = 0.744/0.744/0.744/0.000 ms 00:30:07.961 13:41:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:30:07.961 00:30:07.961 --- 10.0.0.1 ping statistics --- 00:30:07.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.961 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:30:07.961 13:41:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.961 13:41:04 -- nvmf/common.sh@410 -- # return 0 00:30:07.961 13:41:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:07.961 13:41:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.961 13:41:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:07.961 13:41:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.961 13:41:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:07.961 13:41:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:07.961 13:41:04 -- host/fio.sh@16 -- # [[ y != y ]] 00:30:07.961 13:41:04 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:07.961 13:41:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:07.961 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:07.961 13:41:04 -- host/fio.sh@24 -- # nvmfpid=1137888 00:30:07.961 13:41:04 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.961 13:41:04 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:07.961 13:41:04 -- host/fio.sh@28 -- # waitforlisten 1137888 00:30:07.961 13:41:04 -- common/autotest_common.sh@819 -- # '[' -z 1137888 ']' 00:30:07.961 13:41:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.961 13:41:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:07.961 13:41:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.961 13:41:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:07.961 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:07.961 [2024-07-26 13:41:04.460188] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:07.961 [2024-07-26 13:41:04.460274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.961 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.961 [2024-07-26 13:41:04.531787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.961 [2024-07-26 13:41:04.569572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:07.961 [2024-07-26 13:41:04.569718] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.961 [2024-07-26 13:41:04.569727] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.961 [2024-07-26 13:41:04.569734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
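The app_setup_trace notices above give the recipe for inspecting the 0xFFFF tracepoint group mask the target was started with; following that hint, a snapshot could be captured along these lines (the spdk_trace binary path is assumed to sit under the SPDK build directory):

    # Live snapshot of nvmf tracepoints from app instance 0 (matches 'nvmf_tgt -i 0'),
    # or keep the shared-memory trace file for offline decoding, as the notice suggests.
    ./build/bin/spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0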
00:30:07.961 [2024-07-26 13:41:04.569887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.961 [2024-07-26 13:41:04.570010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.961 [2024-07-26 13:41:04.570172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.961 [2024-07-26 13:41:04.570173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.961 13:41:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:07.961 13:41:05 -- common/autotest_common.sh@852 -- # return 0 00:30:07.961 13:41:05 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.961 [2024-07-26 13:41:05.367776] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.961 13:41:05 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:07.961 13:41:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:07.961 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:30:08.222 13:41:05 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:08.222 Malloc1 00:30:08.222 13:41:05 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.482 13:41:05 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:08.482 13:41:05 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.743 [2024-07-26 13:41:06.077543] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.743 13:41:06 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.004 13:41:06 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:09.004 13:41:06 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:09.004 13:41:06 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:09.004 13:41:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:09.004 13:41:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.004 13:41:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:09.004 13:41:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.004 13:41:06 -- common/autotest_common.sh@1320 -- # shift 00:30:09.004 13:41:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:09.004 13:41:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:09.004 13:41:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:09.004 13:41:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:09.004 13:41:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:09.004 13:41:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:09.004 13:41:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:09.004 13:41:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:09.265 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:09.265 fio-3.35 00:30:09.265 Starting 1 thread 00:30:09.265 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.815 00:30:11.815 test: (groupid=0, jobs=1): err= 0: pid=1138603: Fri Jul 26 13:41:09 2024 00:30:11.815 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2004msec) 00:30:11.815 slat (usec): min=2, max=277, avg= 2.16, stdev= 2.27 00:30:11.815 clat (usec): min=3114, max=10058, avg=4972.71, stdev=702.01 00:30:11.815 lat (usec): min=3116, max=10072, avg=4974.87, stdev=702.20 00:30:11.815 clat percentiles (usec): 00:30:11.815 | 1.00th=[ 3752], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4490], 00:30:11.815 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 5014], 00:30:11.815 | 70.00th=[ 5145], 80.00th=[ 5407], 90.00th=[ 5800], 95.00th=[ 6325], 00:30:11.815 | 99.00th=[ 7504], 99.50th=[ 8029], 99.90th=[ 9110], 99.95th=[ 9372], 00:30:11.815 | 99.99th=[10028] 00:30:11.815 bw ( KiB/s): min=57424, max=59776, per=99.96%, avg=58738.00, stdev=979.28, samples=4 00:30:11.815 iops : min=14356, max=14944, avg=14684.50, stdev=244.82, samples=4 00:30:11.815 write: IOPS=14.7k, BW=57.5MiB/s (60.2MB/s)(115MiB/2004msec); 0 zone resets 00:30:11.815 slat (usec): min=2, max=265, avg= 2.24, stdev= 1.70 00:30:11.815 clat (usec): min=1870, max=8698, avg=3698.27, stdev=492.68 00:30:11.815 lat (usec): min=1872, max=8730, avg=3700.51, stdev=492.95 00:30:11.815 clat percentiles (usec): 00:30:11.815 | 1.00th=[ 2507], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3359], 00:30:11.815 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3818], 00:30:11.815 | 70.00th=[ 3916], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4359], 00:30:11.815 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 7308], 99.95th=[ 7898], 00:30:11.815 | 99.99th=[ 8586] 00:30:11.815 bw ( KiB/s): min=57880, max=59472, per=100.00%, avg=58842.00, stdev=677.64, samples=4 00:30:11.815 iops : min=14470, max=14868, avg=14710.50, stdev=169.41, samples=4 00:30:11.815 lat (msec) : 2=0.01%, 4=40.43%, 10=59.54%, 20=0.01% 00:30:11.815 cpu : usr=68.25%, sys=23.51%, ctx=19, majf=0, minf=5 00:30:11.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:11.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.815 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.815 issued rwts: total=29439,29478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.815 00:30:11.815 Run status group 0 (all jobs): 00:30:11.815 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2004-2004msec 00:30:11.815 WRITE: bw=57.5MiB/s (60.2MB/s), 57.5MiB/s-57.5MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2004-2004msec 00:30:11.815 13:41:09 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:11.815 13:41:09 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:11.815 13:41:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:11.815 13:41:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.815 13:41:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:11.815 13:41:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.815 13:41:09 -- common/autotest_common.sh@1320 -- # shift 00:30:11.815 13:41:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:11.815 13:41:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:11.815 13:41:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:11.815 13:41:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:11.815 13:41:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:11.815 13:41:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:11.815 13:41:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:11.815 13:41:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:12.075 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:12.075 fio-3.35 00:30:12.075 Starting 1 thread 00:30:12.335 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.883 00:30:14.883 test: (groupid=0, jobs=1): err= 0: pid=1139258: Fri Jul 26 13:41:11 2024 00:30:14.883 read: IOPS=8416, BW=132MiB/s (138MB/s)(264MiB/2006msec) 00:30:14.883 slat (usec): min=3, max=112, avg= 3.64, stdev= 1.63 00:30:14.883 clat (usec): min=3509, max=54768, avg=9440.80, 
stdev=4567.86 00:30:14.883 lat (usec): min=3513, max=54772, avg=9444.44, stdev=4568.09 00:30:14.883 clat percentiles (usec): 00:30:14.883 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6718], 00:30:14.883 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9372], 00:30:14.883 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12780], 95.00th=[13698], 00:30:14.883 | 99.00th=[23725], 99.50th=[50070], 99.90th=[53740], 99.95th=[54264], 00:30:14.883 | 99.99th=[54789] 00:30:14.883 bw ( KiB/s): min=52832, max=85088, per=51.87%, avg=69848.00, stdev=17180.48, samples=4 00:30:14.883 iops : min= 3302, max= 5318, avg=4365.50, stdev=1073.78, samples=4 00:30:14.883 write: IOPS=5242, BW=81.9MiB/s (85.9MB/s)(142MiB/1738msec); 0 zone resets 00:30:14.883 slat (usec): min=39, max=322, avg=41.12, stdev= 7.59 00:30:14.883 clat (usec): min=4471, max=28006, avg=9760.71, stdev=2368.73 00:30:14.883 lat (usec): min=4511, max=28050, avg=9801.83, stdev=2370.16 00:30:14.883 clat percentiles (usec): 00:30:14.883 | 1.00th=[ 6521], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8094], 00:30:14.883 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:30:14.883 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11863], 95.00th=[12911], 00:30:14.883 | 99.00th=[22676], 99.50th=[24249], 99.90th=[24773], 99.95th=[24773], 00:30:14.883 | 99.99th=[27919] 00:30:14.883 bw ( KiB/s): min=54976, max=88480, per=86.59%, avg=72632.00, stdev=17747.92, samples=4 00:30:14.883 iops : min= 3436, max= 5530, avg=4539.50, stdev=1109.25, samples=4 00:30:14.883 lat (msec) : 4=0.07%, 10=66.66%, 20=31.66%, 50=1.27%, 100=0.33% 00:30:14.883 cpu : usr=81.11%, sys=13.86%, ctx=10, majf=0, minf=26 00:30:14.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:14.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.883 issued rwts: total=16884,9111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.883 00:30:14.883 Run status group 0 (all jobs): 00:30:14.883 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=264MiB (277MB), run=2006-2006msec 00:30:14.883 WRITE: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=142MiB (149MB), run=1738-1738msec 00:30:14.883 13:41:11 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.883 13:41:12 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:14.883 13:41:12 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:14.883 13:41:12 -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:14.883 13:41:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:14.883 13:41:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:14.883 13:41:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:14.883 13:41:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:14.883 13:41:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:14.883 13:41:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:14.884 13:41:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:30:14.884 13:41:12 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t 
PCIe -a 0000:65:00.0 -i 10.0.0.2 00:30:15.145 Nvme0n1 00:30:15.145 13:41:12 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:15.717 13:41:13 -- host/fio.sh@53 -- # ls_guid=8825753d-5d7c-4906-b141-dc3eb496aa93 00:30:15.717 13:41:13 -- host/fio.sh@54 -- # get_lvs_free_mb 8825753d-5d7c-4906-b141-dc3eb496aa93 00:30:15.717 13:41:13 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8825753d-5d7c-4906-b141-dc3eb496aa93 00:30:15.717 13:41:13 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:15.717 13:41:13 -- common/autotest_common.sh@1345 -- # local fc 00:30:15.717 13:41:13 -- common/autotest_common.sh@1346 -- # local cs 00:30:15.717 13:41:13 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:15.976 13:41:13 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:15.977 { 00:30:15.977 "uuid": "8825753d-5d7c-4906-b141-dc3eb496aa93", 00:30:15.977 "name": "lvs_0", 00:30:15.977 "base_bdev": "Nvme0n1", 00:30:15.977 "total_data_clusters": 1787, 00:30:15.977 "free_clusters": 1787, 00:30:15.977 "block_size": 512, 00:30:15.977 "cluster_size": 1073741824 00:30:15.977 } 00:30:15.977 ]' 00:30:15.977 13:41:13 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8825753d-5d7c-4906-b141-dc3eb496aa93") .free_clusters' 00:30:15.977 13:41:13 -- common/autotest_common.sh@1348 -- # fc=1787 00:30:15.977 13:41:13 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8825753d-5d7c-4906-b141-dc3eb496aa93") .cluster_size' 00:30:15.977 13:41:13 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:15.977 13:41:13 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:30:15.977 13:41:13 -- common/autotest_common.sh@1353 -- # echo 1829888 00:30:15.977 1829888 00:30:15.977 13:41:13 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:30:16.236 665f5f1c-d8c5-4e3c-b7c2-7f0fde7e698d 00:30:16.236 13:41:13 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:16.498 13:41:13 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:16.498 13:41:13 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:16.759 13:41:14 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.759 13:41:14 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:16.759 13:41:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:16.759 13:41:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:16.759 13:41:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:16.759 13:41:14 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.759 13:41:14 -- common/autotest_common.sh@1320 -- # shift 00:30:16.759 13:41:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:16.759 13:41:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:16.759 13:41:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:16.759 13:41:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:16.759 13:41:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:16.759 13:41:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:16.759 13:41:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:16.759 13:41:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.020 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:17.020 fio-3.35 00:30:17.020 Starting 1 thread 00:30:17.020 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.632 00:30:19.632 test: (groupid=0, jobs=1): err= 0: pid=1140463: Fri Jul 26 13:41:16 2024 00:30:19.632 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(83.9MiB/2004msec) 00:30:19.632 slat (nsec): min=2052, max=107749, avg=2192.08, stdev=999.17 00:30:19.632 clat (usec): min=3106, max=15122, avg=6810.80, stdev=1039.85 00:30:19.632 lat (usec): min=3108, max=15124, avg=6812.99, stdev=1039.85 00:30:19.632 clat percentiles (usec): 00:30:19.632 | 1.00th=[ 4948], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 6063], 00:30:19.632 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6849], 00:30:19.632 | 70.00th=[ 7046], 80.00th=[ 7373], 90.00th=[ 7963], 95.00th=[ 8717], 00:30:19.632 | 99.00th=[10421], 99.50th=[11863], 99.90th=[13304], 99.95th=[13960], 00:30:19.632 | 99.99th=[14877] 00:30:19.632 bw ( KiB/s): min=41344, max=43728, per=99.84%, avg=42808.00, stdev=1036.69, samples=4 00:30:19.632 iops : min=10336, max=10932, avg=10702.00, stdev=259.17, samples=4 00:30:19.632 write: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(83.8MiB/2004msec); 0 zone resets 00:30:19.632 slat (nsec): min=2113, max=96491, avg=2290.56, stdev=709.76 00:30:19.632 clat (usec): min=1475, max=9114, avg=5081.10, stdev=689.82 00:30:19.632 lat (usec): min=1481, max=9129, avg=5083.39, stdev=689.85 00:30:19.632 clat percentiles (usec): 00:30:19.632 | 1.00th=[ 3326], 5.00th=[ 3916], 10.00th=[ 4228], 20.00th=[ 4555], 00:30:19.632 | 30.00th=[ 4752], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5276], 00:30:19.632 | 70.00th=[ 5407], 80.00th=[ 5604], 90.00th=[ 5866], 95.00th=[ 6128], 00:30:19.632 | 99.00th=[ 6849], 99.50th=[ 7439], 99.90th=[ 8586], 99.95th=[ 8717], 
00:30:19.632 | 99.99th=[ 9110] 00:30:19.632 bw ( KiB/s): min=41856, max=43840, per=99.97%, avg=42782.00, stdev=836.05, samples=4 00:30:19.632 iops : min=10464, max=10960, avg=10695.50, stdev=209.01, samples=4 00:30:19.632 lat (msec) : 2=0.01%, 4=2.92%, 10=96.31%, 20=0.77% 00:30:19.632 cpu : usr=68.65%, sys=24.61%, ctx=17, majf=0, minf=14 00:30:19.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:19.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.632 issued rwts: total=21482,21440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.632 00:30:19.632 Run status group 0 (all jobs): 00:30:19.632 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=83.9MiB (88.0MB), run=2004-2004msec 00:30:19.632 WRITE: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=83.8MiB (87.8MB), run=2004-2004msec 00:30:19.632 13:41:16 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:19.892 13:41:17 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:20.464 13:41:17 -- host/fio.sh@64 -- # ls_nested_guid=65d62a18-0827-46e3-a01b-d895fe44d189 00:30:20.464 13:41:17 -- host/fio.sh@65 -- # get_lvs_free_mb 65d62a18-0827-46e3-a01b-d895fe44d189 00:30:20.464 13:41:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=65d62a18-0827-46e3-a01b-d895fe44d189 00:30:20.464 13:41:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:20.464 13:41:17 -- common/autotest_common.sh@1345 -- # local fc 00:30:20.464 13:41:17 -- common/autotest_common.sh@1346 -- # local cs 00:30:20.464 13:41:17 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:20.725 13:41:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:20.725 { 00:30:20.725 "uuid": "8825753d-5d7c-4906-b141-dc3eb496aa93", 00:30:20.725 "name": "lvs_0", 00:30:20.725 "base_bdev": "Nvme0n1", 00:30:20.725 "total_data_clusters": 1787, 00:30:20.725 "free_clusters": 0, 00:30:20.725 "block_size": 512, 00:30:20.725 "cluster_size": 1073741824 00:30:20.725 }, 00:30:20.725 { 00:30:20.725 "uuid": "65d62a18-0827-46e3-a01b-d895fe44d189", 00:30:20.725 "name": "lvs_n_0", 00:30:20.725 "base_bdev": "665f5f1c-d8c5-4e3c-b7c2-7f0fde7e698d", 00:30:20.725 "total_data_clusters": 457025, 00:30:20.725 "free_clusters": 457025, 00:30:20.725 "block_size": 512, 00:30:20.725 "cluster_size": 4194304 00:30:20.725 } 00:30:20.725 ]' 00:30:20.725 13:41:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="65d62a18-0827-46e3-a01b-d895fe44d189") .free_clusters' 00:30:20.725 13:41:18 -- common/autotest_common.sh@1348 -- # fc=457025 00:30:20.725 13:41:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="65d62a18-0827-46e3-a01b-d895fe44d189") .cluster_size' 00:30:20.725 13:41:18 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:20.725 13:41:18 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:30:20.725 13:41:18 -- common/autotest_common.sh@1353 -- # echo 1828100 00:30:20.725 1828100 00:30:20.725 13:41:18 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 
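The sizes handed to bdev_lvol_create above come straight out of the bdev_lvol_get_lvstores JSON: get_lvs_free_mb multiplies free_clusters by cluster_size and expresses the result in MiB. A quick check of the two values used in this run:

# lvs_0: 1787 free clusters of 1 GiB each; lvs_n_0: 457025 free clusters of 4 MiB each
echo $(( 1787 * 1073741824 / 1048576 ))      # 1829888 MiB, passed to bdev_lvol_create for lvs_0/lbd_0
echo $(( 457025 * 4194304 / 1048576 ))       # 1828100 MiB, passed to bdev_lvol_create for lvs_n_0/lbd_nest_0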
00:30:21.667 c9c88f8a-7a34-4b9f-82c3-146ef7397a8b 00:30:21.927 13:41:19 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:21.927 13:41:19 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:22.188 13:41:19 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:22.188 13:41:19 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.188 13:41:19 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.188 13:41:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:22.188 13:41:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.188 13:41:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:22.188 13:41:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.188 13:41:19 -- common/autotest_common.sh@1320 -- # shift 00:30:22.188 13:41:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:22.188 13:41:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.188 13:41:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.188 13:41:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:22.188 13:41:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:22.502 13:41:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:22.502 13:41:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:22.502 13:41:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.502 13:41:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.502 13:41:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:22.502 13:41:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:22.502 13:41:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:22.503 13:41:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:22.503 13:41:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:22.503 13:41:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.769 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:22.769 fio-3.35 00:30:22.769 Starting 1 thread 00:30:22.769 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.311 00:30:25.311 test: (groupid=0, jobs=1): err= 0: pid=1141659: Fri Jul 26 13:41:22 2024 00:30:25.311 read: IOPS=6687, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2007msec) 00:30:25.311 slat 
(usec): min=2, max=106, avg= 2.23, stdev= 1.27 00:30:25.311 clat (usec): min=3531, max=17084, avg=10639.12, stdev=1080.21 00:30:25.311 lat (usec): min=3550, max=17086, avg=10641.35, stdev=1080.15 00:30:25.311 clat percentiles (usec): 00:30:25.311 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9896], 00:30:25.311 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:30:25.311 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12387], 00:30:25.311 | 99.00th=[14091], 99.50th=[15008], 99.90th=[15533], 99.95th=[16581], 00:30:25.311 | 99.99th=[17171] 00:30:25.311 bw ( KiB/s): min=25272, max=27536, per=99.80%, avg=26696.00, stdev=992.92, samples=4 00:30:25.311 iops : min= 6318, max= 6884, avg=6674.00, stdev=248.23, samples=4 00:30:25.311 write: IOPS=6691, BW=26.1MiB/s (27.4MB/s)(52.5MiB/2007msec); 0 zone resets 00:30:25.311 slat (nsec): min=2131, max=96687, avg=2340.60, stdev=885.73 00:30:25.311 clat (usec): min=1429, max=15094, avg=8332.77, stdev=881.84 00:30:25.311 lat (usec): min=1436, max=15096, avg=8335.11, stdev=881.81 00:30:25.311 clat percentiles (usec): 00:30:25.311 | 1.00th=[ 5932], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7701], 00:30:25.311 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:30:25.311 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9634], 00:30:25.311 | 99.00th=[10290], 99.50th=[10683], 99.90th=[13042], 99.95th=[14222], 00:30:25.311 | 99.99th=[15008] 00:30:25.311 bw ( KiB/s): min=26496, max=27072, per=99.95%, avg=26752.00, stdev=250.61, samples=4 00:30:25.311 iops : min= 6624, max= 6768, avg=6688.00, stdev=62.65, samples=4 00:30:25.311 lat (msec) : 2=0.01%, 4=0.07%, 10=61.61%, 20=38.32% 00:30:25.311 cpu : usr=59.22%, sys=34.70%, ctx=52, majf=0, minf=14 00:30:25.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:25.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.311 issued rwts: total=13422,13429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.311 00:30:25.311 Run status group 0 (all jobs): 00:30:25.311 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2007-2007msec 00:30:25.311 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.5MiB (55.0MB), run=2007-2007msec 00:30:25.311 13:41:22 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:25.311 13:41:22 -- host/fio.sh@74 -- # sync 00:30:25.311 13:41:22 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:27.224 13:41:24 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:27.484 13:41:24 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:28.054 13:41:25 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:28.054 13:41:25 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:30.599 13:41:27 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:30.599 13:41:27 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:30:30.599 13:41:27 -- host/fio.sh@86 -- # nvmftestfini 00:30:30.599 13:41:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:30.599 13:41:27 -- nvmf/common.sh@116 -- # sync 00:30:30.599 13:41:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:30.599 13:41:27 -- nvmf/common.sh@119 -- # set +e 00:30:30.599 13:41:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:30.599 13:41:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:30.599 rmmod nvme_tcp 00:30:30.599 rmmod nvme_fabrics 00:30:30.599 rmmod nvme_keyring 00:30:30.599 13:41:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:30.599 13:41:27 -- nvmf/common.sh@123 -- # set -e 00:30:30.599 13:41:27 -- nvmf/common.sh@124 -- # return 0 00:30:30.599 13:41:27 -- nvmf/common.sh@477 -- # '[' -n 1137888 ']' 00:30:30.599 13:41:27 -- nvmf/common.sh@478 -- # killprocess 1137888 00:30:30.599 13:41:27 -- common/autotest_common.sh@926 -- # '[' -z 1137888 ']' 00:30:30.599 13:41:27 -- common/autotest_common.sh@930 -- # kill -0 1137888 00:30:30.599 13:41:27 -- common/autotest_common.sh@931 -- # uname 00:30:30.599 13:41:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.599 13:41:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1137888 00:30:30.599 13:41:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:30.599 13:41:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:30.599 13:41:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1137888' 00:30:30.599 killing process with pid 1137888 00:30:30.599 13:41:27 -- common/autotest_common.sh@945 -- # kill 1137888 00:30:30.599 13:41:27 -- common/autotest_common.sh@950 -- # wait 1137888 00:30:30.599 13:41:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:30.599 13:41:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:30.599 13:41:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:30.599 13:41:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:30.599 13:41:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:30.599 13:41:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.599 13:41:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.599 13:41:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.514 13:41:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:32.514 00:30:32.514 real 0m32.840s 00:30:32.514 user 2m44.898s 00:30:32.514 sys 0m10.012s 00:30:32.514 13:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.514 13:41:29 -- common/autotest_common.sh@10 -- # set +x 00:30:32.514 ************************************ 00:30:32.514 END TEST nvmf_fio_host 00:30:32.514 ************************************ 00:30:32.514 13:41:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:32.514 13:41:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:32.514 13:41:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:32.514 13:41:29 -- common/autotest_common.sh@10 -- # set +x 00:30:32.514 ************************************ 00:30:32.514 START TEST nvmf_failover 00:30:32.514 ************************************ 00:30:32.514 13:41:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:32.514 * Looking for test storage... 
00:30:32.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:32.514 13:41:29 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.514 13:41:29 -- nvmf/common.sh@7 -- # uname -s 00:30:32.775 13:41:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.775 13:41:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.775 13:41:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.775 13:41:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.775 13:41:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.775 13:41:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.775 13:41:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.775 13:41:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.775 13:41:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.775 13:41:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.775 13:41:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.775 13:41:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.775 13:41:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.775 13:41:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.775 13:41:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.775 13:41:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.775 13:41:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.775 13:41:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.775 13:41:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.775 13:41:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.775 13:41:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.775 13:41:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.775 13:41:30 -- paths/export.sh@5 -- # export PATH 00:30:32.775 13:41:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.775 13:41:30 -- nvmf/common.sh@46 -- # : 0 00:30:32.775 13:41:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:32.775 13:41:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:32.775 13:41:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:32.775 13:41:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.775 13:41:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.775 13:41:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:32.775 13:41:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:32.775 13:41:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:32.775 13:41:30 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.775 13:41:30 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.775 13:41:30 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:32.775 13:41:30 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:32.775 13:41:30 -- host/failover.sh@18 -- # nvmftestinit 00:30:32.775 13:41:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:32.775 13:41:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.775 13:41:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:32.775 13:41:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:32.775 13:41:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:32.775 13:41:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.775 13:41:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.775 13:41:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.775 13:41:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:32.775 13:41:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:32.775 13:41:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:32.775 13:41:30 -- common/autotest_common.sh@10 -- # set +x 00:30:40.912 13:41:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:40.912 13:41:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:40.912 13:41:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:40.912 13:41:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:40.912 13:41:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:40.912 13:41:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:40.912 13:41:36 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:30:40.912 13:41:36 -- nvmf/common.sh@294 -- # net_devs=() 00:30:40.912 13:41:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:40.912 13:41:36 -- nvmf/common.sh@295 -- # e810=() 00:30:40.912 13:41:36 -- nvmf/common.sh@295 -- # local -ga e810 00:30:40.912 13:41:36 -- nvmf/common.sh@296 -- # x722=() 00:30:40.912 13:41:36 -- nvmf/common.sh@296 -- # local -ga x722 00:30:40.912 13:41:36 -- nvmf/common.sh@297 -- # mlx=() 00:30:40.912 13:41:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:40.912 13:41:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.912 13:41:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:40.912 13:41:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:40.912 13:41:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:40.912 13:41:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:40.912 13:41:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:40.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:40.912 13:41:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:40.912 13:41:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:40.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:40.912 13:41:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:40.912 13:41:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:40.912 13:41:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:40.912 13:41:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.912 13:41:36 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:30:40.912 13:41:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.912 13:41:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:40.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:40.912 13:41:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.912 13:41:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:40.913 13:41:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.913 13:41:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:40.913 13:41:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.913 13:41:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:40.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:40.913 13:41:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.913 13:41:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:40.913 13:41:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:40.913 13:41:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:40.913 13:41:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:40.913 13:41:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:40.913 13:41:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.913 13:41:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.913 13:41:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.913 13:41:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:40.913 13:41:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.913 13:41:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.913 13:41:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:40.913 13:41:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.913 13:41:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.913 13:41:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:40.913 13:41:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:40.913 13:41:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.913 13:41:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.913 13:41:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.913 13:41:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.913 13:41:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:40.913 13:41:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.913 13:41:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.913 13:41:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.913 13:41:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:40.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:40.913 00:30:40.913 --- 10.0.0.2 ping statistics --- 00:30:40.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.913 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:40.913 13:41:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:30:40.913 00:30:40.913 --- 10.0.0.1 ping statistics --- 00:30:40.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.913 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:30:40.913 13:41:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.913 13:41:37 -- nvmf/common.sh@410 -- # return 0 00:30:40.913 13:41:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:40.913 13:41:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.913 13:41:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:40.913 13:41:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:40.913 13:41:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.913 13:41:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:40.913 13:41:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:40.913 13:41:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:40.913 13:41:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:40.913 13:41:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:40.913 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:30:40.913 13:41:37 -- nvmf/common.sh@469 -- # nvmfpid=1147268 00:30:40.913 13:41:37 -- nvmf/common.sh@470 -- # waitforlisten 1147268 00:30:40.913 13:41:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:40.913 13:41:37 -- common/autotest_common.sh@819 -- # '[' -z 1147268 ']' 00:30:40.913 13:41:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.913 13:41:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:40.913 13:41:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.913 13:41:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:40.913 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:30:40.913 [2024-07-26 13:41:37.304636] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:40.913 [2024-07-26 13:41:37.304698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.913 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.913 [2024-07-26 13:41:37.395495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.913 [2024-07-26 13:41:37.441211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:40.913 [2024-07-26 13:41:37.441380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.913 [2024-07-26 13:41:37.441391] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.913 [2024-07-26 13:41:37.441407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
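The nvmfappstart step above boils down to launching nvmf_tgt inside the test namespace with a core mask (-m 0xE pins it to cores 1 through 3, matching the three reactors reported next) and then blocking until the application answers on /var/tmp/spdk.sock. A minimal sketch of that wait, under the assumption that rpc_get_methods is used purely as a liveness probe and with relative paths into the SPDK tree standing in for the full workspace paths:

# start the target in the namespace and wait for its RPC socket to accept requests
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # retry until the app has created and bound the UNIX domain socket
done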
00:30:40.913 [2024-07-26 13:41:37.441548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.913 [2024-07-26 13:41:37.441714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.913 [2024-07-26 13:41:37.441715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.913 13:41:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:40.913 13:41:38 -- common/autotest_common.sh@852 -- # return 0 00:30:40.913 13:41:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:40.913 13:41:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:40.913 13:41:38 -- common/autotest_common.sh@10 -- # set +x 00:30:40.913 13:41:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.913 13:41:38 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:40.913 [2024-07-26 13:41:38.259994] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.913 13:41:38 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:41.173 Malloc0 00:30:41.173 13:41:38 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.173 13:41:38 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.433 13:41:38 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.693 [2024-07-26 13:41:38.942408] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.693 13:41:38 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.693 [2024-07-26 13:41:39.106851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:41.693 13:41:39 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.954 [2024-07-26 13:41:39.267341] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:41.954 13:41:39 -- host/failover.sh@31 -- # bdevperf_pid=1147711 00:30:41.954 13:41:39 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:41.954 13:41:39 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.954 13:41:39 -- host/failover.sh@34 -- # waitforlisten 1147711 /var/tmp/bdevperf.sock 00:30:41.954 13:41:39 -- common/autotest_common.sh@819 -- # '[' -z 1147711 ']' 00:30:41.954 13:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.954 13:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:41.954 13:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:30:41.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:41.954 13:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:41.954 13:41:39 -- common/autotest_common.sh@10 -- # set +x 00:30:42.937 13:41:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:42.937 13:41:40 -- common/autotest_common.sh@852 -- # return 0 00:30:42.937 13:41:40 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:42.937 NVMe0n1 00:30:42.937 13:41:40 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:43.198 00:30:43.198 13:41:40 -- host/failover.sh@39 -- # run_test_pid=1147948 00:30:43.198 13:41:40 -- host/failover.sh@41 -- # sleep 1 00:30:43.198 13:41:40 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:44.583 13:41:41 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.583 [2024-07-26 13:41:41.796292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the state(5) to be set 00:30:44.583 [2024-07-26 13:41:41.796406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eef00 is same with the 
state(5) to be set 00:30:44.584 13:41:41 -- host/failover.sh@45 -- # sleep 3 00:30:47.883 13:41:44 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.883 00 00:30:47.883 13:41:45 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.883 [2024-07-26 13:41:45.216182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ef6f0 is same with the state(5) to be set
00:30:47.884 13:41:45 -- host/failover.sh@50 -- # sleep 3 00:30:51.181 13:41:48 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.181 [2024-07-26 13:41:48.371858] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.181 13:41:48 -- host/failover.sh@55 -- # sleep 1 00:30:52.120 13:41:49 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:52.120 [2024-07-26 13:41:49.541053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798ec0 is same with the
state(5) to be set 00:30:52.121 13:41:49 -- host/failover.sh@59 -- # wait 1147948 00:30:58.712 0 00:30:58.712 13:41:55 -- host/failover.sh@61 -- # killprocess 1147711 00:30:58.712 13:41:55 -- common/autotest_common.sh@926 -- # '[' -z 1147711 ']' 00:30:58.712 13:41:55 -- common/autotest_common.sh@930 -- # kill -0 1147711 00:30:58.712 13:41:55 -- common/autotest_common.sh@931 -- # uname 00:30:58.712 13:41:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:58.712 13:41:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1147711 00:30:58.712 13:41:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:58.712 13:41:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:58.712 13:41:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1147711' 00:30:58.712 killing process with pid 1147711 00:30:58.712 13:41:55 -- common/autotest_common.sh@945 -- # kill 1147711 00:30:58.712 13:41:55 -- common/autotest_common.sh@950 -- # wait 1147711 00:30:58.712 13:41:55 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:58.712 [2024-07-26 13:41:39.342294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:58.712 [2024-07-26 13:41:39.342351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147711 ] 00:30:58.712 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.712 [2024-07-26 13:41:39.401305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.712 [2024-07-26 13:41:39.430059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.712 Running I/O for 15 seconds... 
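For reference, the failover exercise captured above boils down to the shell sketch below, distilled only from the commands visible in this log. It is a minimal sketch, not the test script itself: it assumes an nvmf_tgt target is already running and reachable on 10.0.0.2, and $spdk is a placeholder for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree used here; ports, the NQN and the bdevperf options are taken verbatim from the log.

# Target side: TCP transport, one malloc-backed namespace, three listeners
$spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# Initiator side: bdevperf with two paths to the same subsystem, then a 15 s verify workload
# (the test script waits for /var/tmp/bdevperf.sock to appear -- waitforlisten -- before issuing RPCs)
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Failover is forced by tearing down the listener behind the active path while I/O is running,
# which is what produces the recv-state errors and the ABORTED - SQ DELETION completions above
$spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422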
00:30:58.712 [2024-07-26 13:41:41.797273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.712 [2024-07-26 13:41:41.797512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.712 [2024-07-26 13:41:41.797519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36880 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.797980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.797990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.797997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-26 13:41:41.798061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.713 [2024-07-26 13:41:41.798128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-26 13:41:41.798138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 
[2024-07-26 13:41:41.798145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-26 13:41:41.798718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-26 13:41:41.798745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.714 [2024-07-26 13:41:41.798752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.798801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 
13:41:41.798810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.798970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.798986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.798995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37328 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.715 [2024-07-26 13:41:41.799348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-26 13:41:41.799368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-26 13:41:41.799378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:41.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:41.799403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.716 [2024-07-26 13:41:41.799434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.716 [2024-07-26 13:41:41.799442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37376 len:8 PRP1 0x0 PRP2 0x0 00:30:58.716 [2024-07-26 13:41:41.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799486] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b51960 was disconnected and freed. reset controller. 
00:30:58.716 [2024-07-26 13:41:41.799502] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:58.716 [2024-07-26 13:41:41.799523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.716 [2024-07-26 13:41:41.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.716 [2024-07-26 13:41:41.799553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.716 [2024-07-26 13:41:41.799568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.716 [2024-07-26 13:41:41.799584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:41.799592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.716 [2024-07-26 13:41:41.801987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.716 [2024-07-26 13:41:41.802009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b32df0 (9): Bad file descriptor 00:30:58.716 [2024-07-26 13:41:41.836316] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
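(Note on the burst above: every queued read/write completes with status "(00/08)", which in NVMe terms is Status Code Type 0x0, Generic Command Status, with Status Code 0x08, Command Aborted due to SQ Deletion. That is the expected teardown path, not a media error: the host aborts the queued I/O, frees qpair 0x1b51960, fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and then reports the controller reset successful. The short C sketch below only decodes that (SCT/SC) pair so the log is easier to read; it is a self-contained illustration with hypothetical names and constants mirroring the NVMe specification values, not SPDK source.)

    /* Minimal, self-contained sketch (hypothetical names, not SPDK code):
     * decode the "(SCT/SC)" pair printed in the completions above.
     * Per the NVMe specification, SCT 0x0 is Generic Command Status and
     * SC 0x08 is "Command Aborted due to SQ Deletion" -- the status logged
     * for every queued I/O when the submission queue is torn down during
     * the failover/reset sequence. */
    #include <stdio.h>

    enum { SCT_GENERIC = 0x00 };             /* Status Code Type 0x0          */
    enum { SC_ABORTED_SQ_DELETION = 0x08 };  /* Status Code 0x08, generic set */

    /* Returns 1 when a completion was aborted only because its SQ was deleted. */
    static int aborted_by_sq_deletion(unsigned sct, unsigned sc)
    {
        return sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION;
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;      /* the "(00/08)" pair from the log */

        if (aborted_by_sq_deletion(sct, sc))
            printf("ABORTED - SQ DELETION: benign abort, I/O retried after controller reset\n");
        else
            printf("unexpected status %02x/%02x\n", sct, sc);
        return 0;
    }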
00:30:58.716 [2024-07-26 13:41:45.216759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.216989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.216996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.716 [2024-07-26 13:41:45.217246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.716 [2024-07-26 13:41:45.217253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217314] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66616 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:58.717 [2024-07-26 13:41:45.217656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.717 [2024-07-26 13:41:45.217749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.717 [2024-07-26 13:41:45.217756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 
13:41:45.217822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.217909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.217926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.217943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.217959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.217985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.217992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.718 [2024-07-26 13:41:45.218279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-26 13:41:45.218379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-26 13:41:45.218388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 
[2024-07-26 13:41:45.218503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-26 13:41:45.218715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-26 13:41:45.218928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.218937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3f310 is same with the state(5) to be set 00:30:58.719 [2024-07-26 13:41:45.218946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.719 [2024-07-26 13:41:45.218954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.719 [2024-07-26 13:41:45.218961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67304 len:8 PRP1 0x0 PRP2 0x0 00:30:58.719 [2024-07-26 13:41:45.218968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.219004] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b3f310 was disconnected and freed. reset controller. 
00:30:58.719 [2024-07-26 13:41:45.219014] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:58.719 [2024-07-26 13:41:45.219034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.719 [2024-07-26 13:41:45.219043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-26 13:41:45.219051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.719 [2024-07-26 13:41:45.219059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:45.219066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.720 [2024-07-26 13:41:45.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:45.219081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.720 [2024-07-26 13:41:45.219089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:45.219096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.720 [2024-07-26 13:41:45.219128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b32df0 (9): Bad file descriptor 00:30:58.720 [2024-07-26 13:41:45.221323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.720 [2024-07-26 13:41:45.290957] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:58.720 [2024-07-26 13:41:49.541791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.541987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.541995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 
13:41:49.542004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-26 13:41:49.542391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.720 [2024-07-26 13:41:49.542399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:124848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.721 [2024-07-26 13:41:49.542944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.721 [2024-07-26 13:41:49.542979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.721 [2024-07-26 13:41:49.542988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.542995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 
[2024-07-26 13:41:49.543012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543180] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.722 [2024-07-26 13:41:49.543414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.722 [2024-07-26 13:41:49.543522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.722 [2024-07-26 13:41:49.543529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.723 [2024-07-26 13:41:49.543895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.723 [2024-07-26 13:41:49.543943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.543965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.723 [2024-07-26 13:41:49.543972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.723 [2024-07-26 13:41:49.543979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124752 len:8 PRP1 0x0 PRP2 0x0 00:30:58.723 [2024-07-26 13:41:49.543987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.544024] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf8430 was disconnected and freed. reset controller. 
00:30:58.723 [2024-07-26 13:41:49.544034] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:58.723 [2024-07-26 13:41:49.544053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.723 [2024-07-26 13:41:49.544062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.544072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.723 [2024-07-26 13:41:49.544080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.544088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.723 [2024-07-26 13:41:49.544095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.544103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.723 [2024-07-26 13:41:49.544110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.723 [2024-07-26 13:41:49.544117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-07-26 13:41:49.546690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-07-26 13:41:49.546717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b32df0 (9): Bad file descriptor 00:30:58.723 [2024-07-26 13:41:49.620623] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
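(Editor's note) The two failover blocks above (13:41:45, 4421 to 4422, and 13:41:49, 4422 to 4420) work because the test registers all three target ports as paths of the same bdev_nvme controller before the run: when the active queue pair is torn down (the ABORTED - SQ DELETION completions), bdev_nvme_failover_trid switches to the next registered address and resets the controller. A minimal sketch of that path registration, using the same RPCs that appear in the failover.sh trace further down (addresses, ports and the NQN are taken from this log; the rpc.py paths are shortened here):

  # add two extra listeners on the target side
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attaching the same bdev name (-b NVMe0) with different ports registers
  # the alternate trids that the failover above switches between
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1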
00:30:58.723
00:30:58.723 Latency(us)
00:30:58.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:58.723 Verification LBA range: start 0x0 length 0x4000
00:30:58.723 NVMe0n1 : 15.00 19797.86 77.34 675.36 0.00 6236.13 1146.88 21845.33
00:30:58.723 ===================================================================================================================
00:30:58.723 Total : 19797.86 77.34 675.36 0.00 6236.13 1146.88 21845.33
00:30:58.723 Received shutdown signal, test time was about 15.000000 seconds
00:30:58.723
00:30:58.723 Latency(us)
00:30:58.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.723 ===================================================================================================================
00:30:58.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:58.723 13:41:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:58.724 13:41:55 -- host/failover.sh@65 -- # count=3
00:30:58.724 13:41:55 -- host/failover.sh@67 -- # (( count != 3 ))
00:30:58.724 13:41:55 -- host/failover.sh@73 -- # bdevperf_pid=1150814
00:30:58.724 13:41:55 -- host/failover.sh@75 -- # waitforlisten 1150814 /var/tmp/bdevperf.sock
00:30:58.724 13:41:55 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:58.724 13:41:55 -- common/autotest_common.sh@819 -- # '[' -z 1150814 ']'
00:30:58.724 13:41:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:58.724 13:41:55 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:58.724 13:41:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:58.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
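(Editor's note) The failover.sh@65/@67 lines above are the pass/fail gate for the first phase: the script counts the "Resetting controller successful" notices (one per path switch during the 15-second run; the 4421-to-4422 and 4422-to-4420 switches are visible in this excerpt, the initial switch away from 4420 occurred earlier in the run) and fails unless it sees exactly three. A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file that is cat'ed later in this trace; the exact variable and file names used inside failover.sh may differ:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi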
00:30:58.724 13:41:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:58.724 13:41:55 -- common/autotest_common.sh@10 -- # set +x 00:30:59.669 13:41:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:59.669 13:41:56 -- common/autotest_common.sh@852 -- # return 0 00:30:59.670 13:41:56 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:59.670 [2024-07-26 13:41:56.924102] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:59.670 13:41:56 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:59.670 [2024-07-26 13:41:57.076444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:59.670 13:41:57 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.242 NVMe0n1 00:31:00.242 13:41:57 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.503 00:31:00.503 13:41:57 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:00.765 00:31:00.765 13:41:58 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:00.765 13:41:58 -- host/failover.sh@82 -- # grep -q NVMe0 00:31:00.765 13:41:58 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.026 13:41:58 -- host/failover.sh@87 -- # sleep 3 00:31:04.329 13:42:01 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:04.329 13:42:01 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:04.329 13:42:01 -- host/failover.sh@90 -- # run_test_pid=1152080 00:31:04.329 13:42:01 -- host/failover.sh@92 -- # wait 1152080 00:31:04.329 13:42:01 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:05.310 0 00:31:05.310 13:42:02 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.310 [2024-07-26 13:41:56.024490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:05.310 [2024-07-26 13:41:56.024549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150814 ] 00:31:05.310 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.310 [2024-07-26 13:41:56.083483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.310 [2024-07-26 13:41:56.110288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.310 [2024-07-26 13:41:58.367597] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:05.310 [2024-07-26 13:41:58.367641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.310 [2024-07-26 13:41:58.367652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.310 [2024-07-26 13:41:58.367661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.310 [2024-07-26 13:41:58.367669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.310 [2024-07-26 13:41:58.367677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.310 [2024-07-26 13:41:58.367684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.310 [2024-07-26 13:41:58.367692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.310 [2024-07-26 13:41:58.367699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.310 [2024-07-26 13:41:58.367706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:05.310 [2024-07-26 13:41:58.367727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.310 [2024-07-26 13:41:58.367741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d0df0 (9): Bad file descriptor 00:31:05.310 [2024-07-26 13:41:58.415996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:05.310 Running I/O for 1 seconds... 
00:31:05.310 00:31:05.310 Latency(us) 00:31:05.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.310 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:05.310 Verification LBA range: start 0x0 length 0x4000 00:31:05.310 NVMe0n1 : 1.00 20103.32 78.53 0.00 0.00 6337.84 1228.80 7700.48 00:31:05.310 =================================================================================================================== 00:31:05.310 Total : 20103.32 78.53 0.00 0.00 6337.84 1228.80 7700.48 00:31:05.310 13:42:02 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:05.310 13:42:02 -- host/failover.sh@95 -- # grep -q NVMe0 00:31:05.572 13:42:02 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.572 13:42:03 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:05.572 13:42:03 -- host/failover.sh@99 -- # grep -q NVMe0 00:31:05.833 13:42:03 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.093 13:42:03 -- host/failover.sh@101 -- # sleep 3 00:31:09.390 13:42:06 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:09.390 13:42:06 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:09.390 13:42:06 -- host/failover.sh@108 -- # killprocess 1150814 00:31:09.390 13:42:06 -- common/autotest_common.sh@926 -- # '[' -z 1150814 ']' 00:31:09.390 13:42:06 -- common/autotest_common.sh@930 -- # kill -0 1150814 00:31:09.390 13:42:06 -- common/autotest_common.sh@931 -- # uname 00:31:09.390 13:42:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:09.390 13:42:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1150814 00:31:09.390 13:42:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:09.390 13:42:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:09.390 13:42:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1150814' 00:31:09.390 killing process with pid 1150814 00:31:09.390 13:42:06 -- common/autotest_common.sh@945 -- # kill 1150814 00:31:09.390 13:42:06 -- common/autotest_common.sh@950 -- # wait 1150814 00:31:09.390 13:42:06 -- host/failover.sh@110 -- # sync 00:31:09.390 13:42:06 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.390 13:42:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:09.390 13:42:06 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.390 13:42:06 -- host/failover.sh@116 -- # nvmftestfini 00:31:09.390 13:42:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:09.390 13:42:06 -- nvmf/common.sh@116 -- # sync 00:31:09.390 13:42:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:09.390 13:42:06 -- nvmf/common.sh@119 -- # set +e 00:31:09.390 13:42:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:09.390 13:42:06 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:31:09.650 rmmod nvme_tcp 00:31:09.650 rmmod nvme_fabrics 00:31:09.650 rmmod nvme_keyring 00:31:09.650 13:42:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:09.650 13:42:06 -- nvmf/common.sh@123 -- # set -e 00:31:09.650 13:42:06 -- nvmf/common.sh@124 -- # return 0 00:31:09.650 13:42:06 -- nvmf/common.sh@477 -- # '[' -n 1147268 ']' 00:31:09.650 13:42:06 -- nvmf/common.sh@478 -- # killprocess 1147268 00:31:09.650 13:42:06 -- common/autotest_common.sh@926 -- # '[' -z 1147268 ']' 00:31:09.650 13:42:06 -- common/autotest_common.sh@930 -- # kill -0 1147268 00:31:09.650 13:42:06 -- common/autotest_common.sh@931 -- # uname 00:31:09.650 13:42:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:09.650 13:42:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1147268 00:31:09.650 13:42:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:09.650 13:42:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:09.650 13:42:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1147268' 00:31:09.650 killing process with pid 1147268 00:31:09.650 13:42:06 -- common/autotest_common.sh@945 -- # kill 1147268 00:31:09.650 13:42:06 -- common/autotest_common.sh@950 -- # wait 1147268 00:31:09.650 13:42:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:09.650 13:42:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:09.650 13:42:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:09.650 13:42:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.650 13:42:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:09.650 13:42:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.650 13:42:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.650 13:42:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.196 13:42:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:12.196 00:31:12.196 real 0m39.305s 00:31:12.196 user 2m1.219s 00:31:12.196 sys 0m8.030s 00:31:12.196 13:42:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:12.196 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:31:12.196 ************************************ 00:31:12.196 END TEST nvmf_failover 00:31:12.196 ************************************ 00:31:12.196 13:42:09 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:12.196 13:42:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:12.196 13:42:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:12.196 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:31:12.196 ************************************ 00:31:12.196 START TEST nvmf_discovery 00:31:12.196 ************************************ 00:31:12.196 13:42:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:12.196 * Looking for test storage... 
00:31:12.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.196 13:42:09 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.196 13:42:09 -- nvmf/common.sh@7 -- # uname -s 00:31:12.196 13:42:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.196 13:42:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.196 13:42:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.196 13:42:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.196 13:42:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.196 13:42:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.196 13:42:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.196 13:42:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.196 13:42:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.196 13:42:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.196 13:42:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:12.196 13:42:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:12.196 13:42:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.196 13:42:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.196 13:42:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.196 13:42:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.196 13:42:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.196 13:42:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.196 13:42:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.196 13:42:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.196 13:42:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.196 13:42:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.196 13:42:09 -- paths/export.sh@5 -- # export PATH 00:31:12.196 13:42:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.196 13:42:09 -- nvmf/common.sh@46 -- # : 0 00:31:12.196 13:42:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:12.196 13:42:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:12.196 13:42:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:12.196 13:42:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.196 13:42:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.196 13:42:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:12.196 13:42:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:12.196 13:42:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:12.196 13:42:09 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:12.196 13:42:09 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:12.196 13:42:09 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:12.196 13:42:09 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:12.196 13:42:09 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:12.196 13:42:09 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:12.196 13:42:09 -- host/discovery.sh@25 -- # nvmftestinit 00:31:12.196 13:42:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:12.196 13:42:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.196 13:42:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:12.196 13:42:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:12.196 13:42:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:12.196 13:42:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.196 13:42:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:12.196 13:42:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.196 13:42:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:12.196 13:42:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:12.196 13:42:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:12.196 13:42:09 -- common/autotest_common.sh@10 -- # set +x 00:31:20.345 13:42:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:20.345 13:42:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:20.345 13:42:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:20.345 13:42:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:20.345 13:42:16 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:20.345 13:42:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:20.346 13:42:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:20.346 13:42:16 -- nvmf/common.sh@294 -- # net_devs=() 00:31:20.346 13:42:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:20.346 13:42:16 -- nvmf/common.sh@295 -- # e810=() 00:31:20.346 13:42:16 -- nvmf/common.sh@295 -- # local -ga e810 00:31:20.346 13:42:16 -- nvmf/common.sh@296 -- # x722=() 00:31:20.346 13:42:16 -- nvmf/common.sh@296 -- # local -ga x722 00:31:20.346 13:42:16 -- nvmf/common.sh@297 -- # mlx=() 00:31:20.346 13:42:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:20.346 13:42:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.346 13:42:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:20.346 13:42:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:20.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:20.346 13:42:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:20.346 13:42:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:20.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:20.346 13:42:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:20.346 
13:42:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.346 13:42:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.346 13:42:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:20.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:20.346 13:42:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:20.346 13:42:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.346 13:42:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.346 13:42:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:20.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:20.346 13:42:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:20.346 13:42:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:20.346 13:42:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.346 13:42:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.346 13:42:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:20.346 13:42:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.346 13:42:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.346 13:42:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:20.346 13:42:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.346 13:42:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.346 13:42:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:20.346 13:42:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:20.346 13:42:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.346 13:42:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.346 13:42:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.346 13:42:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.346 13:42:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:20.346 13:42:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.346 13:42:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.346 13:42:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.346 13:42:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:20.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:31:20.346 00:31:20.346 --- 10.0.0.2 ping statistics --- 00:31:20.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.346 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:31:20.346 13:42:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:31:20.346 00:31:20.346 --- 10.0.0.1 ping statistics --- 00:31:20.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.346 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:31:20.346 13:42:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.346 13:42:16 -- nvmf/common.sh@410 -- # return 0 00:31:20.346 13:42:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:20.346 13:42:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.346 13:42:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:20.346 13:42:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.346 13:42:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:20.346 13:42:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:20.346 13:42:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:20.347 13:42:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:20.347 13:42:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:20.347 13:42:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 13:42:16 -- nvmf/common.sh@469 -- # nvmfpid=1157733 00:31:20.347 13:42:16 -- nvmf/common.sh@470 -- # waitforlisten 1157733 00:31:20.347 13:42:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:20.347 13:42:16 -- common/autotest_common.sh@819 -- # '[' -z 1157733 ']' 00:31:20.347 13:42:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.347 13:42:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:20.347 13:42:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.347 13:42:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:20.347 13:42:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 [2024-07-26 13:42:16.797019] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:20.347 [2024-07-26 13:42:16.797087] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.347 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.347 [2024-07-26 13:42:16.883426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.347 [2024-07-26 13:42:16.928648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:20.347 [2024-07-26 13:42:16.928801] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.347 [2024-07-26 13:42:16.928811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.347 [2024-07-26 13:42:16.928819] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
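[editor's note] The nvmftestinit trace above shows how the phy test bed reaches 10.0.0.2: the two ice port netdevs found earlier (cvl_0_0, cvl_0_1) are split across a network namespace, the target side gets 10.0.0.2/24 inside cvl_0_0_ns_spdk, the initiator keeps 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-checked, and nvmf_tgt is then started inside the namespace. A condensed sketch of that wiring, using only commands and names visible in this log (binary path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # The target application then runs inside the namespace, as in the trace below:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2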
00:31:20.347 [2024-07-26 13:42:16.928843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.347 13:42:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:20.347 13:42:17 -- common/autotest_common.sh@852 -- # return 0 00:31:20.347 13:42:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:20.347 13:42:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 13:42:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.347 13:42:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.347 13:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 [2024-07-26 13:42:17.621011] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.347 13:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.347 13:42:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:20.347 13:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 [2024-07-26 13:42:17.633236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:20.347 13:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.347 13:42:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:20.347 13:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 null0 00:31:20.347 13:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.347 13:42:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:20.347 13:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 null1 00:31:20.347 13:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.347 13:42:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:20.347 13:42:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 13:42:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.347 13:42:17 -- host/discovery.sh@45 -- # hostpid=1157940 00:31:20.347 13:42:17 -- host/discovery.sh@46 -- # waitforlisten 1157940 /tmp/host.sock 00:31:20.347 13:42:17 -- common/autotest_common.sh@819 -- # '[' -z 1157940 ']' 00:31:20.347 13:42:17 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:20.347 13:42:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:20.347 13:42:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:20.347 13:42:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:20.347 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:20.347 13:42:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:20.347 13:42:17 -- common/autotest_common.sh@10 -- # set +x 00:31:20.347 [2024-07-26 13:42:17.724133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:20.347 [2024-07-26 13:42:17.724196] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157940 ] 00:31:20.347 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.347 [2024-07-26 13:42:17.787792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.608 [2024-07-26 13:42:17.825435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:20.608 [2024-07-26 13:42:17.825604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.182 13:42:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:21.182 13:42:18 -- common/autotest_common.sh@852 -- # return 0 00:31:21.182 13:42:18 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.182 13:42:18 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.182 13:42:18 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.182 13:42:18 -- host/discovery.sh@72 -- # notify_id=0 00:31:21.182 13:42:18 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # sort 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # xargs 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.182 13:42:18 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:21.182 13:42:18 -- host/discovery.sh@79 -- # get_bdev_list 00:31:21.182 13:42:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.182 13:42:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.182 13:42:18 -- host/discovery.sh@55 -- # sort 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- host/discovery.sh@55 -- # xargs 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.182 13:42:18 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:21.182 13:42:18 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.182 13:42:18 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.182 13:42:18 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.182 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # sort 00:31:21.182 13:42:18 -- host/discovery.sh@59 -- # xargs 00:31:21.182 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.443 13:42:18 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:21.443 13:42:18 -- host/discovery.sh@83 -- # get_bdev_list 00:31:21.443 13:42:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.443 13:42:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.443 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.443 13:42:18 -- host/discovery.sh@55 -- # sort 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:42:18 -- host/discovery.sh@55 -- # xargs 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:21.444 13:42:18 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:21.444 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.444 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # sort 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # xargs 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:21.444 13:42:18 -- host/discovery.sh@87 -- # get_bdev_list 00:31:21.444 13:42:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.444 13:42:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.444 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.444 13:42:18 -- host/discovery.sh@55 -- # sort 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:42:18 -- host/discovery.sh@55 -- # xargs 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:21.444 13:42:18 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.444 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 [2024-07-26 13:42:18.856388] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.444 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.444 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:42:18 -- host/discovery.sh@59 -- # sort 00:31:21.444 13:42:18 -- 
host/discovery.sh@59 -- # xargs 00:31:21.444 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.444 13:42:18 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:21.444 13:42:18 -- host/discovery.sh@93 -- # get_bdev_list 00:31:21.706 13:42:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.706 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.706 13:42:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.706 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.706 13:42:18 -- host/discovery.sh@55 -- # sort 00:31:21.706 13:42:18 -- host/discovery.sh@55 -- # xargs 00:31:21.706 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.706 13:42:18 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:21.706 13:42:18 -- host/discovery.sh@94 -- # get_notification_count 00:31:21.706 13:42:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:21.706 13:42:18 -- host/discovery.sh@74 -- # jq '. | length' 00:31:21.706 13:42:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.706 13:42:18 -- common/autotest_common.sh@10 -- # set +x 00:31:21.706 13:42:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.706 13:42:19 -- host/discovery.sh@74 -- # notification_count=0 00:31:21.706 13:42:19 -- host/discovery.sh@75 -- # notify_id=0 00:31:21.706 13:42:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:21.706 13:42:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:21.706 13:42:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.706 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:31:21.706 13:42:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.706 13:42:19 -- host/discovery.sh@100 -- # sleep 1 00:31:22.280 [2024-07-26 13:42:19.560436] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:22.280 [2024-07-26 13:42:19.560463] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:22.280 [2024-07-26 13:42:19.560478] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:22.280 [2024-07-26 13:42:19.650747] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:22.280 [2024-07-26 13:42:19.751664] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:22.280 [2024-07-26 13:42:19.751687] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:22.854 13:42:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:22.854 13:42:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.854 13:42:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.854 13:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.854 13:42:20 -- host/discovery.sh@59 -- # sort 00:31:22.854 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:22.854 13:42:20 -- host/discovery.sh@59 -- # xargs 00:31:22.854 13:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@102 -- # get_bdev_list 00:31:22.854 13:42:20 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.854 13:42:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.854 13:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.854 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:22.854 13:42:20 -- host/discovery.sh@55 -- # sort 00:31:22.854 13:42:20 -- host/discovery.sh@55 -- # xargs 00:31:22.854 13:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:22.854 13:42:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:22.854 13:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.854 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:22.854 13:42:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:22.854 13:42:20 -- host/discovery.sh@63 -- # sort -n 00:31:22.854 13:42:20 -- host/discovery.sh@63 -- # xargs 00:31:22.854 13:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:22.854 13:42:20 -- host/discovery.sh@104 -- # get_notification_count 00:31:22.854 13:42:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:22.855 13:42:20 -- host/discovery.sh@74 -- # jq '. | length' 00:31:22.855 13:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.855 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:22.855 13:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.855 13:42:20 -- host/discovery.sh@74 -- # notification_count=1 00:31:22.855 13:42:20 -- host/discovery.sh@75 -- # notify_id=1 00:31:22.855 13:42:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:22.855 13:42:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:22.855 13:42:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.855 13:42:20 -- common/autotest_common.sh@10 -- # set +x 00:31:22.855 13:42:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.855 13:42:20 -- host/discovery.sh@109 -- # sleep 1 00:31:23.799 13:42:21 -- host/discovery.sh@110 -- # get_bdev_list 00:31:23.799 13:42:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.799 13:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.799 13:42:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.799 13:42:21 -- common/autotest_common.sh@10 -- # set +x 00:31:23.799 13:42:21 -- host/discovery.sh@55 -- # sort 00:31:23.799 13:42:21 -- host/discovery.sh@55 -- # xargs 00:31:24.061 13:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.061 13:42:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:24.061 13:42:21 -- host/discovery.sh@111 -- # get_notification_count 00:31:24.061 13:42:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:24.061 13:42:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:24.061 13:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.061 13:42:21 -- common/autotest_common.sh@10 -- # set +x 00:31:24.061 13:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.061 13:42:21 -- host/discovery.sh@74 -- # notification_count=1 00:31:24.061 13:42:21 -- host/discovery.sh@75 -- # notify_id=2 00:31:24.061 13:42:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:24.061 13:42:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:24.061 13:42:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.061 13:42:21 -- common/autotest_common.sh@10 -- # set +x 00:31:24.061 [2024-07-26 13:42:21.359045] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:24.061 [2024-07-26 13:42:21.359215] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:24.061 [2024-07-26 13:42:21.359244] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.061 13:42:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.061 13:42:21 -- host/discovery.sh@117 -- # sleep 1 00:31:24.061 [2024-07-26 13:42:21.447489] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:24.061 [2024-07-26 13:42:21.509198] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:24.061 [2024-07-26 13:42:21.509219] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.061 [2024-07-26 13:42:21.509225] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:25.006 13:42:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:25.006 13:42:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.006 13:42:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.006 13:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.006 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:31:25.006 13:42:22 -- host/discovery.sh@59 -- # sort 00:31:25.006 13:42:22 -- host/discovery.sh@59 -- # xargs 00:31:25.006 13:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.006 13:42:22 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.006 13:42:22 -- host/discovery.sh@119 -- # get_bdev_list 00:31:25.006 13:42:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.006 13:42:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.006 13:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.006 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:31:25.006 13:42:22 -- host/discovery.sh@55 -- # sort 00:31:25.006 13:42:22 -- host/discovery.sh@55 -- # xargs 00:31:25.006 13:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.006 13:42:22 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:25.006 13:42:22 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:25.268 13:42:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:25.268 13:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.268 13:42:22 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:31:25.268 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 13:42:22 -- host/discovery.sh@63 -- # sort -n 00:31:25.268 13:42:22 -- host/discovery.sh@63 -- # xargs 00:31:25.268 13:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.268 13:42:22 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:25.268 13:42:22 -- host/discovery.sh@121 -- # get_notification_count 00:31:25.268 13:42:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.268 13:42:22 -- host/discovery.sh@74 -- # jq '. | length' 00:31:25.268 13:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.268 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 13:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.268 13:42:22 -- host/discovery.sh@74 -- # notification_count=0 00:31:25.268 13:42:22 -- host/discovery.sh@75 -- # notify_id=2 00:31:25.268 13:42:22 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:25.268 13:42:22 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.268 13:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.268 13:42:22 -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 [2024-07-26 13:42:22.578344] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:25.268 [2024-07-26 13:42:22.578366] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.268 13:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.268 13:42:22 -- host/discovery.sh@127 -- # sleep 1 00:31:25.268 [2024-07-26 13:42:22.585030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.268 [2024-07-26 13:42:22.585050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.268 [2024-07-26 13:42:22.585059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.268 [2024-07-26 13:42:22.585067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.268 [2024-07-26 13:42:22.585075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.268 [2024-07-26 13:42:22.585082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.268 [2024-07-26 13:42:22.585090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.268 [2024-07-26 13:42:22.585097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.268 [2024-07-26 13:42:22.585104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.268 [2024-07-26 13:42:22.595042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.268 [2024-07-26 13:42:22.605084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.268 [2024-07-26 13:42:22.605667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.606211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.606226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.268 [2024-07-26 13:42:22.606237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.268 [2024-07-26 13:42:22.606255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.268 [2024-07-26 13:42:22.606291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.268 [2024-07-26 13:42:22.606300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.268 [2024-07-26 13:42:22.606308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.268 [2024-07-26 13:42:22.606323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.268 [2024-07-26 13:42:22.615139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.268 [2024-07-26 13:42:22.615716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.616434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.616472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.268 [2024-07-26 13:42:22.616483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.268 [2024-07-26 13:42:22.616502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.268 [2024-07-26 13:42:22.616532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.268 [2024-07-26 13:42:22.616540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.268 [2024-07-26 13:42:22.616548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.268 [2024-07-26 13:42:22.616563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
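[editor's note] The repeated blocks above and below are the host-side bdev_nvme reconnect loop after the 4420 listener was removed from cnode0: each attempt gets connect() errno 111 (ECONNREFUSED) because nothing listens on 10.0.0.2:4420 any more, so controller reinitialization and the reset both fail, and the attempt repeats roughly every 10 ms (see the timestamps). The surviving paths can be inspected the same way the get_subsystem_paths helper is traced earlier in this log:

  # Mirrors get_subsystem_paths from this log: list the trsvcid of every path of controller nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # Earlier this printed "4420 4421"; after the removal it is expected to settle on
  # 4421 alone (inference from the test flow, the check itself is outside this excerpt)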
00:31:25.268 [2024-07-26 13:42:22.625194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.268 [2024-07-26 13:42:22.625678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.625944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.268 [2024-07-26 13:42:22.625954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.268 [2024-07-26 13:42:22.625962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.625974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.625992] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.626000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.626007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.626019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.269 [2024-07-26 13:42:22.635255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.635738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.636446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.636483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.636494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.636513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.636538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.636546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.636554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.636569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:25.269 [2024-07-26 13:42:22.645305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.645835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.646419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.646457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.646468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.646487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.646524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.646537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.646545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.646560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.269 [2024-07-26 13:42:22.655360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.655889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.656397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.656408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.656416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.656428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.656444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.656451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.656458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.656469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:25.269 [2024-07-26 13:42:22.665413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.665855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.666366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.666377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.666385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.666395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.666440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.666450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.666457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.666468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.269 [2024-07-26 13:42:22.675465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.675998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.676559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.676598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.676609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.676627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.676653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.676661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.676673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.676688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:25.269 [2024-07-26 13:42:22.685521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.686046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.686573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.686611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.686622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.686640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.686677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.686686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.686694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.686709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.269 [2024-07-26 13:42:22.695577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.696093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.696641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.696653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.696660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.696671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.696689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.696695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.696702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.696713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
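The cycle above repeats roughly every 10 ms: nvme_ctrlr_disconnect, two posix_sock_create connect() attempts that fail with errno = 111, and a reset that is declared failed, because nothing is listening on 10.0.0.2:4420 any more; the discovery_remove_controllers lines just below report 4420 not found and 4421 found again. A minimal Python sketch of the same probe-until-the-listener-returns idea, purely illustrative and not SPDK's posix_sock layer; the address, port and pacing are taken from the log:

import errno
import socket
import time

def wait_for_listener(addr="10.0.0.2", port=4420, delay_s=0.01, attempts=50):
    """Keep retrying a TCP connect, mirroring the errno = 111 loop in the log."""
    for _ in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((addr, port))
            return s                              # a listener answered; caller owns the socket
        except OSError as exc:
            s.close()
            if exc.errno != errno.ECONNREFUSED:   # 111: host reachable, nothing bound to the port
                raise                             # any other failure is a different problem
            time.sleep(delay_s)                   # back off briefly, then try again
    return None                                   # give up, as bdev_nvme eventually does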
00:31:25.269 [2024-07-26 13:42:22.705630] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.269 [2024-07-26 13:42:22.706140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.706698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.269 [2024-07-26 13:42:22.706736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be11f0 with addr=10.0.0.2, port=4420 00:31:25.269 [2024-07-26 13:42:22.706747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be11f0 is same with the state(5) to be set 00:31:25.269 [2024-07-26 13:42:22.706765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be11f0 (9): Bad file descriptor 00:31:25.269 [2024-07-26 13:42:22.706802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.269 [2024-07-26 13:42:22.706811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.269 [2024-07-26 13:42:22.706820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.269 [2024-07-26 13:42:22.706843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.269 [2024-07-26 13:42:22.709792] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:25.269 [2024-07-26 13:42:22.709810] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:26.211 13:42:23 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:26.211 13:42:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.211 13:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.211 13:42:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:26.211 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:31:26.211 13:42:23 -- host/discovery.sh@59 -- # sort 00:31:26.211 13:42:23 -- host/discovery.sh@59 -- # xargs 00:31:26.211 13:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.211 13:42:23 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.211 13:42:23 -- host/discovery.sh@129 -- # get_bdev_list 00:31:26.211 13:42:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.211 13:42:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.211 13:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.211 13:42:23 -- host/discovery.sh@55 -- # sort 00:31:26.211 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:31:26.211 13:42:23 -- host/discovery.sh@55 -- # xargs 00:31:26.211 13:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:26.472 13:42:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:26.472 13:42:23 -- host/discovery.sh@63 -- # xargs 00:31:26.472 13:42:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:26.472 13:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.472 13:42:23 -- 
common/autotest_common.sh@10 -- # set +x 00:31:26.472 13:42:23 -- host/discovery.sh@63 -- # sort -n 00:31:26.472 13:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@131 -- # get_notification_count 00:31:26.472 13:42:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:26.472 13:42:23 -- host/discovery.sh@74 -- # jq '. | length' 00:31:26.472 13:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.472 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:31:26.472 13:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@74 -- # notification_count=0 00:31:26.472 13:42:23 -- host/discovery.sh@75 -- # notify_id=2 00:31:26.472 13:42:23 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:26.472 13:42:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.472 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:31:26.472 13:42:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.472 13:42:23 -- host/discovery.sh@135 -- # sleep 1 00:31:27.484 13:42:24 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:27.484 13:42:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.484 13:42:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.484 13:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.484 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:31:27.484 13:42:24 -- host/discovery.sh@59 -- # sort 00:31:27.484 13:42:24 -- host/discovery.sh@59 -- # xargs 00:31:27.484 13:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.484 13:42:24 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:27.484 13:42:24 -- host/discovery.sh@137 -- # get_bdev_list 00:31:27.484 13:42:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.484 13:42:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.484 13:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.484 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:31:27.484 13:42:24 -- host/discovery.sh@55 -- # sort 00:31:27.484 13:42:24 -- host/discovery.sh@55 -- # xargs 00:31:27.484 13:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.484 13:42:24 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:27.484 13:42:24 -- host/discovery.sh@138 -- # get_notification_count 00:31:27.484 13:42:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:27.484 13:42:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:27.484 13:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.484 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:31:27.484 13:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.744 13:42:24 -- host/discovery.sh@74 -- # notification_count=2 00:31:27.744 13:42:24 -- host/discovery.sh@75 -- # notify_id=4 00:31:27.744 13:42:24 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:27.744 13:42:24 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.744 13:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.744 13:42:24 -- common/autotest_common.sh@10 -- # set +x 00:31:28.688 [2024-07-26 13:42:25.986448] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:28.688 [2024-07-26 13:42:25.986465] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:28.688 [2024-07-26 13:42:25.986478] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:28.688 [2024-07-26 13:42:26.075762] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:28.949 [2024-07-26 13:42:26.387672] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:28.949 [2024-07-26 13:42:26.387703] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:28.949 13:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:28.949 13:42:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:28.949 13:42:26 -- common/autotest_common.sh@640 -- # local es=0 00:31:28.949 13:42:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:28.949 13:42:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:28.949 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:28.949 13:42:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:28.949 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:28.949 13:42:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:28.949 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.949 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:28.949 request: 00:31:28.949 { 00:31:28.949 "name": "nvme", 00:31:28.949 "trtype": "tcp", 00:31:28.949 "traddr": "10.0.0.2", 00:31:28.949 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:28.949 "adrfam": "ipv4", 00:31:28.949 "trsvcid": "8009", 00:31:28.949 "wait_for_attach": true, 00:31:28.949 "method": "bdev_nvme_start_discovery", 00:31:28.949 "req_id": 1 00:31:28.949 } 00:31:28.949 Got JSON-RPC error response 00:31:28.949 response: 00:31:28.949 { 00:31:28.949 "code": -17, 00:31:28.949 "message": "File exists" 00:31:28.949 } 00:31:28.949 13:42:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:28.949 13:42:26 -- common/autotest_common.sh@643 -- # es=1 00:31:28.949 13:42:26 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:28.949 13:42:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:28.949 13:42:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:28.949 13:42:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:28.949 13:42:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:28.949 13:42:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:28.949 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.949 13:42:26 -- host/discovery.sh@67 -- # sort 00:31:28.949 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:28.949 13:42:26 -- host/discovery.sh@67 -- # xargs 00:31:29.210 13:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.210 13:42:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:29.210 13:42:26 -- host/discovery.sh@147 -- # get_bdev_list 00:31:29.210 13:42:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.210 13:42:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.210 13:42:26 -- host/discovery.sh@55 -- # sort 00:31:29.210 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.210 13:42:26 -- host/discovery.sh@55 -- # xargs 00:31:29.210 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:29.210 13:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.210 13:42:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.211 13:42:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.211 13:42:26 -- common/autotest_common.sh@640 -- # local es=0 00:31:29.211 13:42:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.211 13:42:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:29.211 13:42:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.211 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.211 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:29.211 request: 00:31:29.211 { 00:31:29.211 "name": "nvme_second", 00:31:29.211 "trtype": "tcp", 00:31:29.211 "traddr": "10.0.0.2", 00:31:29.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:29.211 "adrfam": "ipv4", 00:31:29.211 "trsvcid": "8009", 00:31:29.211 "wait_for_attach": true, 00:31:29.211 "method": "bdev_nvme_start_discovery", 00:31:29.211 "req_id": 1 00:31:29.211 } 00:31:29.211 Got JSON-RPC error response 00:31:29.211 response: 00:31:29.211 { 00:31:29.211 "code": -17, 00:31:29.211 "message": "File exists" 00:31:29.211 } 00:31:29.211 13:42:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:29.211 13:42:26 -- common/autotest_common.sh@643 -- # es=1 00:31:29.211 13:42:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:29.211 13:42:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:29.211 13:42:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:29.211 
13:42:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:29.211 13:42:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:29.211 13:42:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:29.211 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.211 13:42:26 -- host/discovery.sh@67 -- # sort 00:31:29.211 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:29.211 13:42:26 -- host/discovery.sh@67 -- # xargs 00:31:29.211 13:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.211 13:42:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:29.211 13:42:26 -- host/discovery.sh@153 -- # get_bdev_list 00:31:29.211 13:42:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.211 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.211 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:29.211 13:42:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.211 13:42:26 -- host/discovery.sh@55 -- # sort 00:31:29.211 13:42:26 -- host/discovery.sh@55 -- # xargs 00:31:29.211 13:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.211 13:42:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:29.211 13:42:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.211 13:42:26 -- common/autotest_common.sh@640 -- # local es=0 00:31:29.211 13:42:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.211 13:42:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:29.211 13:42:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:29.211 13:42:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:29.211 13:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.211 13:42:26 -- common/autotest_common.sh@10 -- # set +x 00:31:30.600 [2024-07-26 13:42:27.655420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.600 [2024-07-26 13:42:27.655954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.600 [2024-07-26 13:42:27.655967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1e470 with addr=10.0.0.2, port=8010 00:31:30.600 [2024-07-26 13:42:27.655980] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:30.600 [2024-07-26 13:42:27.655987] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:30.600 [2024-07-26 13:42:27.655995] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:31.546 [2024-07-26 13:42:28.657697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.546 [2024-07-26 13:42:28.658207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.546 [2024-07-26 13:42:28.658219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1c1e470 with addr=10.0.0.2, port=8010 00:31:31.546 [2024-07-26 13:42:28.658229] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.546 [2024-07-26 13:42:28.658235] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.546 [2024-07-26 13:42:28.658242] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:32.491 [2024-07-26 13:42:29.659539] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:32.491 request: 00:31:32.491 { 00:31:32.491 "name": "nvme_second", 00:31:32.491 "trtype": "tcp", 00:31:32.491 "traddr": "10.0.0.2", 00:31:32.491 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:32.491 "adrfam": "ipv4", 00:31:32.491 "trsvcid": "8010", 00:31:32.491 "attach_timeout_ms": 3000, 00:31:32.491 "method": "bdev_nvme_start_discovery", 00:31:32.491 "req_id": 1 00:31:32.491 } 00:31:32.491 Got JSON-RPC error response 00:31:32.491 response: 00:31:32.491 { 00:31:32.491 "code": -110, 00:31:32.491 "message": "Connection timed out" 00:31:32.491 } 00:31:32.491 13:42:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:32.491 13:42:29 -- common/autotest_common.sh@643 -- # es=1 00:31:32.491 13:42:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:32.491 13:42:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:32.491 13:42:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:32.491 13:42:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:32.491 13:42:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:32.491 13:42:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:32.491 13:42:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.491 13:42:29 -- host/discovery.sh@67 -- # sort 00:31:32.491 13:42:29 -- common/autotest_common.sh@10 -- # set +x 00:31:32.491 13:42:29 -- host/discovery.sh@67 -- # xargs 00:31:32.491 13:42:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.491 13:42:29 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:32.491 13:42:29 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:32.491 13:42:29 -- host/discovery.sh@162 -- # kill 1157940 00:31:32.491 13:42:29 -- host/discovery.sh@163 -- # nvmftestfini 00:31:32.491 13:42:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:32.491 13:42:29 -- nvmf/common.sh@116 -- # sync 00:31:32.491 13:42:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:32.491 13:42:29 -- nvmf/common.sh@119 -- # set +e 00:31:32.491 13:42:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:32.491 13:42:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:32.491 rmmod nvme_tcp 00:31:32.491 rmmod nvme_fabrics 00:31:32.491 rmmod nvme_keyring 00:31:32.491 13:42:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:32.491 13:42:29 -- nvmf/common.sh@123 -- # set -e 00:31:32.491 13:42:29 -- nvmf/common.sh@124 -- # return 0 00:31:32.491 13:42:29 -- nvmf/common.sh@477 -- # '[' -n 1157733 ']' 00:31:32.491 13:42:29 -- nvmf/common.sh@478 -- # killprocess 1157733 00:31:32.491 13:42:29 -- common/autotest_common.sh@926 -- # '[' -z 1157733 ']' 00:31:32.491 13:42:29 -- common/autotest_common.sh@930 -- # kill -0 1157733 00:31:32.491 13:42:29 -- common/autotest_common.sh@931 -- # uname 00:31:32.491 13:42:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.491 13:42:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1157733 
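The request/response dumps above come straight from the JSON-RPC server listening on /tmp/host.sock: re-issuing bdev_nvme_start_discovery while a discovery service is already attached returns -17 ("File exists"), and pointing it at port 8010, where nothing answers, returns -110 ("Connection timed out") once the 3000 ms attach timeout expires. The test drives this through the rpc_cmd shell wrapper; a rough Python equivalent of one such call is sketched below. The JSON-RPC 2.0 framing and the read loop are assumptions of the sketch, while the socket path, method name and parameters are the ones shown in the log:

import json
import socket

def rpc(sock_path, method, params=None):
    """Send one JSON-RPC request to the SPDK app over its Unix socket and return the decoded reply."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:                           # accumulate until a complete JSON object parses
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full JSON-RPC reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except ValueError:
                continue                      # reply not complete yet, keep reading

# Mirrors host/discovery.sh@150: starting discovery again while nvme0 is already attached
# is expected to fail, and the error object carries the -17 / "File exists" seen above.
reply = rpc("/tmp/host.sock", "bdev_nvme_start_discovery",
            {"name": "nvme_second", "trtype": "tcp", "adrfam": "ipv4",
             "traddr": "10.0.0.2", "trsvcid": "8009",
             "hostnqn": "nqn.2021-12.io.spdk:test", "wait_for_attach": True})
if "error" in reply:
    print(reply["error"]["code"], reply["error"]["message"])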
00:31:32.491 13:42:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:32.491 13:42:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:32.491 13:42:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1157733' 00:31:32.491 killing process with pid 1157733 00:31:32.491 13:42:29 -- common/autotest_common.sh@945 -- # kill 1157733 00:31:32.491 13:42:29 -- common/autotest_common.sh@950 -- # wait 1157733 00:31:32.491 13:42:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:32.491 13:42:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:32.492 13:42:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:32.492 13:42:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.492 13:42:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:32.492 13:42:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.492 13:42:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.492 13:42:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.048 13:42:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:35.048 00:31:35.048 real 0m22.798s 00:31:35.048 user 0m28.773s 00:31:35.048 sys 0m7.001s 00:31:35.048 13:42:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:35.048 13:42:32 -- common/autotest_common.sh@10 -- # set +x 00:31:35.048 ************************************ 00:31:35.048 END TEST nvmf_discovery 00:31:35.048 ************************************ 00:31:35.048 13:42:32 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:35.048 13:42:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:35.048 13:42:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:35.048 13:42:32 -- common/autotest_common.sh@10 -- # set +x 00:31:35.048 ************************************ 00:31:35.048 START TEST nvmf_discovery_remove_ifc 00:31:35.048 ************************************ 00:31:35.048 13:42:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:35.048 * Looking for test storage... 
00:31:35.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.048 13:42:32 -- nvmf/common.sh@7 -- # uname -s 00:31:35.048 13:42:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.048 13:42:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.048 13:42:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.048 13:42:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.048 13:42:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.048 13:42:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.048 13:42:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.048 13:42:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.048 13:42:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.048 13:42:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.048 13:42:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.048 13:42:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.048 13:42:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.048 13:42:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.048 13:42:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.048 13:42:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.048 13:42:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.048 13:42:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.048 13:42:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.048 13:42:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.048 13:42:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.048 13:42:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.048 13:42:32 -- paths/export.sh@5 -- # export PATH 00:31:35.048 13:42:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.048 13:42:32 -- nvmf/common.sh@46 -- # : 0 00:31:35.048 13:42:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:35.048 13:42:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:35.048 13:42:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:35.048 13:42:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.048 13:42:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.048 13:42:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:35.048 13:42:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:35.048 13:42:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:35.048 13:42:32 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:35.048 13:42:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:35.048 13:42:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.048 13:42:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:35.048 13:42:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:35.048 13:42:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:35.048 13:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.048 13:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.048 13:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.048 13:42:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:35.048 13:42:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:35.048 13:42:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:35.048 13:42:32 -- common/autotest_common.sh@10 -- # set +x 00:31:41.657 13:42:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:41.657 13:42:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:41.657 13:42:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:41.657 13:42:38 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:41.657 13:42:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:41.657 13:42:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:41.657 13:42:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:41.657 13:42:38 -- nvmf/common.sh@294 -- # net_devs=() 00:31:41.657 13:42:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:41.657 13:42:38 -- nvmf/common.sh@295 -- # e810=() 00:31:41.657 13:42:38 -- nvmf/common.sh@295 -- # local -ga e810 00:31:41.657 13:42:38 -- nvmf/common.sh@296 -- # x722=() 00:31:41.657 13:42:38 -- nvmf/common.sh@296 -- # local -ga x722 00:31:41.657 13:42:38 -- nvmf/common.sh@297 -- # mlx=() 00:31:41.657 13:42:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:41.657 13:42:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.657 13:42:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:41.657 13:42:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:41.657 13:42:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:41.657 13:42:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:41.657 13:42:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:41.657 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:41.657 13:42:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:41.657 13:42:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:41.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:41.657 13:42:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:41.657 13:42:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:41.657 13:42:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:41.657 13:42:38 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:41.657 13:42:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.657 13:42:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:41.657 13:42:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.657 13:42:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:41.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:41.657 13:42:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.657 13:42:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:41.657 13:42:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.657 13:42:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:41.657 13:42:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.657 13:42:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:41.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:41.657 13:42:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.657 13:42:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:41.657 13:42:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:41.657 13:42:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:41.657 13:42:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:41.657 13:42:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:41.657 13:42:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.657 13:42:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.657 13:42:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.657 13:42:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:41.657 13:42:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.657 13:42:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.657 13:42:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:41.657 13:42:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.657 13:42:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.657 13:42:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:41.657 13:42:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:41.657 13:42:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.657 13:42:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.919 13:42:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.919 13:42:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.919 13:42:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:41.919 13:42:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.919 13:42:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.919 13:42:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.920 13:42:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:41.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:41.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:31:41.920 00:31:41.920 --- 10.0.0.2 ping statistics --- 00:31:41.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.920 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:31:41.920 13:42:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:41.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:31:41.920 00:31:41.920 --- 10.0.0.1 ping statistics --- 00:31:41.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.920 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:31:41.920 13:42:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.920 13:42:39 -- nvmf/common.sh@410 -- # return 0 00:31:41.920 13:42:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:41.920 13:42:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.920 13:42:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:41.920 13:42:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:41.920 13:42:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.920 13:42:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:41.920 13:42:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:41.920 13:42:39 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:41.920 13:42:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:41.920 13:42:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:41.920 13:42:39 -- common/autotest_common.sh@10 -- # set +x 00:31:41.920 13:42:39 -- nvmf/common.sh@469 -- # nvmfpid=1164629 00:31:41.920 13:42:39 -- nvmf/common.sh@470 -- # waitforlisten 1164629 00:31:41.920 13:42:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:41.920 13:42:39 -- common/autotest_common.sh@819 -- # '[' -z 1164629 ']' 00:31:41.920 13:42:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.920 13:42:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:41.920 13:42:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.920 13:42:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:41.920 13:42:39 -- common/autotest_common.sh@10 -- # set +x 00:31:42.182 [2024-07-26 13:42:39.431755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:42.182 [2024-07-26 13:42:39.431816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.182 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.182 [2024-07-26 13:42:39.518443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.182 [2024-07-26 13:42:39.563031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:42.182 [2024-07-26 13:42:39.563180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:42.182 [2024-07-26 13:42:39.563190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.182 [2024-07-26 13:42:39.563198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.182 [2024-07-26 13:42:39.563238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.127 13:42:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:43.127 13:42:40 -- common/autotest_common.sh@852 -- # return 0 00:31:43.127 13:42:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:43.127 13:42:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:43.127 13:42:40 -- common/autotest_common.sh@10 -- # set +x 00:31:43.127 13:42:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.127 13:42:40 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:43.127 13:42:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.127 13:42:40 -- common/autotest_common.sh@10 -- # set +x 00:31:43.127 [2024-07-26 13:42:40.308534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.127 [2024-07-26 13:42:40.316740] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:43.127 null0 00:31:43.127 [2024-07-26 13:42:40.348697] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.127 13:42:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.127 13:42:40 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1164664 00:31:43.127 13:42:40 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1164664 /tmp/host.sock 00:31:43.127 13:42:40 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:43.127 13:42:40 -- common/autotest_common.sh@819 -- # '[' -z 1164664 ']' 00:31:43.127 13:42:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:43.127 13:42:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:43.127 13:42:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:43.127 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:43.127 13:42:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:43.127 13:42:40 -- common/autotest_common.sh@10 -- # set +x 00:31:43.127 [2024-07-26 13:42:40.429256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:43.127 [2024-07-26 13:42:40.429320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164664 ] 00:31:43.127 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.127 [2024-07-26 13:42:40.494069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.127 [2024-07-26 13:42:40.532741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.127 [2024-07-26 13:42:40.532895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.071 13:42:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:44.071 13:42:41 -- common/autotest_common.sh@852 -- # return 0 00:31:44.071 13:42:41 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:44.071 13:42:41 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:44.071 13:42:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.071 13:42:41 -- common/autotest_common.sh@10 -- # set +x 00:31:44.071 13:42:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.071 13:42:41 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:44.071 13:42:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.071 13:42:41 -- common/autotest_common.sh@10 -- # set +x 00:31:44.071 13:42:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.071 13:42:41 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:44.071 13:42:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.071 13:42:41 -- common/autotest_common.sh@10 -- # set +x 00:31:45.013 [2024-07-26 13:42:42.267449] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:45.013 [2024-07-26 13:42:42.267478] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:45.013 [2024-07-26 13:42:42.267493] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:45.013 [2024-07-26 13:42:42.356776] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:45.275 [2024-07-26 13:42:42.583010] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:45.275 [2024-07-26 13:42:42.583052] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:45.275 [2024-07-26 13:42:42.583072] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:45.275 [2024-07-26 13:42:42.583086] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:45.275 [2024-07-26 13:42:42.583105] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:45.275 13:42:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:45.275 [2024-07-26 13:42:42.587425] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2026aa0 was disconnected and freed. delete nvme_qpair. 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.275 13:42:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.275 13:42:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.275 13:42:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:45.275 13:42:42 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.536 13:42:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.536 13:42:42 -- common/autotest_common.sh@10 -- # set +x 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.536 13:42:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:45.536 13:42:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:46.479 13:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:46.479 13:42:43 -- common/autotest_common.sh@10 -- # set +x 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:46.479 13:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:46.479 13:42:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.421 13:42:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:47.421 13:42:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.421 13:42:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.421 13:42:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.421 13:42:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.421 13:42:44 -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 13:42:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.421 13:42:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.700 13:42:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:47.700 13:42:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@29 -- # sort 
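From this point the test simply polls the bdev list once a second (the wait_for_bdev / sleep 1 loop at host/discovery_remove_ifc.sh@33-34) and, after 10.0.0.2/24 is deleted from cvl_0_0 and the link is taken down, expects nvme0n1 to drop out of it. The shell helper is nothing more than rpc_cmd bdev_get_bdevs piped through jq -r '.[].name', sort and xargs; a hedged Python rendering of the same wait loop, reusing the kind of raw-socket rpc() helper sketched earlier, could look like this:

import json
import socket
import time

def rpc(sock_path, method, params=None):
    """Minimal JSON-RPC call over the SPDK app's Unix socket (same sketch as above)."""
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except ValueError:
                continue

def wait_for_bdev(expected, sock_path="/tmp/host.sock", timeout_s=30):
    """Poll bdev_get_bdevs until the sorted, space-joined name list equals `expected` ('' = empty)."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        names = sorted(b["name"] for b in rpc(sock_path, "bdev_get_bdevs")["result"])
        if " ".join(names) == expected:       # same comparison the shell [[ ... != ... ]] makes
            return True
        time.sleep(1)                         # matches host/discovery_remove_ifc.sh@34 -- sleep 1
    return False

# wait_for_bdev("nvme0n1") after attach, wait_for_bdev("") once the interface has been removed.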
00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.690 13:42:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.690 13:42:45 -- common/autotest_common.sh@10 -- # set +x 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.690 13:42:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.690 13:42:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.632 13:42:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.632 13:42:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.632 13:42:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.632 13:42:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.632 13:42:46 -- common/autotest_common.sh@10 -- # set +x 00:31:49.632 13:42:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.632 13:42:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.632 13:42:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.632 13:42:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:49.632 13:42:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:50.576 [2024-07-26 13:42:48.023501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:50.577 [2024-07-26 13:42:48.023544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.577 [2024-07-26 13:42:48.023556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.577 [2024-07-26 13:42:48.023566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.577 [2024-07-26 13:42:48.023573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.577 [2024-07-26 13:42:48.023581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.577 [2024-07-26 13:42:48.023588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.577 [2024-07-26 13:42:48.023595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.577 [2024-07-26 13:42:48.023602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.577 [2024-07-26 13:42:48.023611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.577 [2024-07-26 13:42:48.023618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.577 [2024-07-26 13:42:48.023625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feceb0 is same with the state(5) to be set 00:31:50.577 [2024-07-26 13:42:48.033521] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feceb0 (9): Bad file descriptor 00:31:50.577 13:42:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.577 [2024-07-26 13:42:48.043562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:50.577 13:42:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.577 13:42:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.577 13:42:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.577 13:42:48 -- common/autotest_common.sh@10 -- # set +x 00:31:50.577 13:42:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.577 13:42:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.974 [2024-07-26 13:42:49.105238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:52.918 [2024-07-26 13:42:50.129398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:52.918 [2024-07-26 13:42:50.129449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feceb0 with addr=10.0.0.2, port=4420 00:31:52.918 [2024-07-26 13:42:50.129464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feceb0 is same with the state(5) to be set 00:31:52.918 [2024-07-26 13:42:50.129488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:52.918 [2024-07-26 13:42:50.129497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:52.918 [2024-07-26 13:42:50.129504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:52.918 [2024-07-26 13:42:50.129512] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:52.918 [2024-07-26 13:42:50.129868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feceb0 (9): Bad file descriptor 00:31:52.918 [2024-07-26 13:42:50.129891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:52.918 [2024-07-26 13:42:50.129912] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:52.918 [2024-07-26 13:42:50.129935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.918 [2024-07-26 13:42:50.129946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.918 [2024-07-26 13:42:50.129956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.918 [2024-07-26 13:42:50.129964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.918 [2024-07-26 13:42:50.129972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.918 [2024-07-26 13:42:50.129980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.918 [2024-07-26 13:42:50.129987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.918 [2024-07-26 13:42:50.129994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.918 [2024-07-26 13:42:50.130002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.918 [2024-07-26 13:42:50.130009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.918 [2024-07-26 13:42:50.130017] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:52.918 [2024-07-26 13:42:50.130490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fed2c0 (9): Bad file descriptor 00:31:52.918 [2024-07-26 13:42:50.131503] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:52.918 [2024-07-26 13:42:50.131515] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:52.918 13:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.918 13:42:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.918 13:42:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.862 13:42:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.862 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.862 13:42:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.862 13:42:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.862 13:42:51 -- common/autotest_common.sh@10 -- # set +x 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.862 13:42:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.862 13:42:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.124 13:42:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:54.124 13:42:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.695 [2024-07-26 13:42:52.142745] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:54.695 [2024-07-26 13:42:52.142767] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:54.695 [2024-07-26 13:42:52.142781] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:54.956 [2024-07-26 13:42:52.233058] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:54.956 13:42:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:54.956 13:42:52 -- common/autotest_common.sh@10 -- # set +x 
00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:54.956 13:42:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.956 [2024-07-26 13:42:52.413427] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:54.956 [2024-07-26 13:42:52.413467] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:54.956 [2024-07-26 13:42:52.413486] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:54.956 [2024-07-26 13:42:52.413499] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:54.956 [2024-07-26 13:42:52.413507] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:54.956 13:42:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.956 [2024-07-26 13:42:52.421989] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ffc6e0 was disconnected and freed. delete nvme_qpair. 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.347 13:42:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.347 13:42:53 -- common/autotest_common.sh@10 -- # set +x 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.347 13:42:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1164664 00:31:56.347 13:42:53 -- common/autotest_common.sh@926 -- # '[' -z 1164664 ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@930 -- # kill -0 1164664 00:31:56.347 13:42:53 -- common/autotest_common.sh@931 -- # uname 00:31:56.347 13:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1164664 00:31:56.347 13:42:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:56.347 13:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1164664' 00:31:56.347 killing process with pid 1164664 00:31:56.347 13:42:53 -- common/autotest_common.sh@945 -- # kill 1164664 00:31:56.347 13:42:53 -- common/autotest_common.sh@950 -- # wait 1164664 00:31:56.347 13:42:53 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:56.347 13:42:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:56.347 13:42:53 -- nvmf/common.sh@116 -- # sync 00:31:56.347 13:42:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:56.347 13:42:53 -- nvmf/common.sh@119 -- # set +e 00:31:56.347 13:42:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:56.347 13:42:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:56.347 rmmod nvme_tcp 00:31:56.347 rmmod nvme_fabrics 00:31:56.347 rmmod nvme_keyring 00:31:56.347 13:42:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:56.347 13:42:53 -- nvmf/common.sh@123 -- # set -e 00:31:56.347 13:42:53 
-- nvmf/common.sh@124 -- # return 0 00:31:56.347 13:42:53 -- nvmf/common.sh@477 -- # '[' -n 1164629 ']' 00:31:56.347 13:42:53 -- nvmf/common.sh@478 -- # killprocess 1164629 00:31:56.347 13:42:53 -- common/autotest_common.sh@926 -- # '[' -z 1164629 ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@930 -- # kill -0 1164629 00:31:56.347 13:42:53 -- common/autotest_common.sh@931 -- # uname 00:31:56.347 13:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1164629 00:31:56.347 13:42:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:56.347 13:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:56.347 13:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1164629' 00:31:56.347 killing process with pid 1164629 00:31:56.347 13:42:53 -- common/autotest_common.sh@945 -- # kill 1164629 00:31:56.347 13:42:53 -- common/autotest_common.sh@950 -- # wait 1164629 00:31:56.608 13:42:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:56.608 13:42:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:56.608 13:42:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:56.608 13:42:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.608 13:42:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:56.608 13:42:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.608 13:42:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.608 13:42:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.523 13:42:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:58.523 00:31:58.523 real 0m23.850s 00:31:58.523 user 0m28.254s 00:31:58.523 sys 0m6.474s 00:31:58.523 13:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.523 13:42:55 -- common/autotest_common.sh@10 -- # set +x 00:31:58.523 ************************************ 00:31:58.523 END TEST nvmf_discovery_remove_ifc 00:31:58.523 ************************************ 00:31:58.523 13:42:55 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:58.523 13:42:55 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.523 13:42:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:58.523 13:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.523 13:42:55 -- common/autotest_common.sh@10 -- # set +x 00:31:58.523 ************************************ 00:31:58.523 START TEST nvmf_digest 00:31:58.523 ************************************ 00:31:58.523 13:42:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.785 * Looking for test storage... 
00:31:58.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.785 13:42:56 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.785 13:42:56 -- nvmf/common.sh@7 -- # uname -s 00:31:58.785 13:42:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.785 13:42:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.785 13:42:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.785 13:42:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.785 13:42:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.785 13:42:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.785 13:42:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.785 13:42:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.785 13:42:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.785 13:42:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.785 13:42:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.785 13:42:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:58.785 13:42:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.785 13:42:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.785 13:42:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.785 13:42:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.785 13:42:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.785 13:42:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.785 13:42:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.785 13:42:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.785 13:42:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.785 13:42:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.785 13:42:56 -- paths/export.sh@5 -- # export PATH 00:31:58.786 13:42:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.786 13:42:56 -- nvmf/common.sh@46 -- # : 0 00:31:58.786 13:42:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:58.786 13:42:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:58.786 13:42:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:58.786 13:42:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.786 13:42:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.786 13:42:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:58.786 13:42:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:58.786 13:42:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:58.786 13:42:56 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:58.786 13:42:56 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:58.786 13:42:56 -- host/digest.sh@16 -- # runtime=2 00:31:58.786 13:42:56 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:58.786 13:42:56 -- host/digest.sh@132 -- # nvmftestinit 00:31:58.786 13:42:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:58.786 13:42:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.786 13:42:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:58.786 13:42:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:58.786 13:42:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:58.786 13:42:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.786 13:42:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.786 13:42:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.786 13:42:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:58.786 13:42:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:58.786 13:42:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:58.786 13:42:56 -- common/autotest_common.sh@10 -- # set +x 00:32:05.377 13:43:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:05.377 13:43:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:05.377 13:43:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:05.377 13:43:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:05.377 13:43:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:05.377 13:43:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:05.377 13:43:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:05.377 13:43:02 -- 
nvmf/common.sh@294 -- # net_devs=() 00:32:05.377 13:43:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:05.377 13:43:02 -- nvmf/common.sh@295 -- # e810=() 00:32:05.377 13:43:02 -- nvmf/common.sh@295 -- # local -ga e810 00:32:05.377 13:43:02 -- nvmf/common.sh@296 -- # x722=() 00:32:05.377 13:43:02 -- nvmf/common.sh@296 -- # local -ga x722 00:32:05.377 13:43:02 -- nvmf/common.sh@297 -- # mlx=() 00:32:05.377 13:43:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:05.377 13:43:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.377 13:43:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:05.377 13:43:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:05.377 13:43:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:05.377 13:43:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.377 13:43:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:05.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:05.377 13:43:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.377 13:43:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:05.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:05.377 13:43:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:05.377 13:43:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:05.377 13:43:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.377 13:43:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.377 13:43:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.377 13:43:02 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.377 13:43:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:05.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:05.377 13:43:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.377 13:43:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.377 13:43:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.377 13:43:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.377 13:43:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.377 13:43:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:05.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:05.377 13:43:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.377 13:43:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:05.377 13:43:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:05.377 13:43:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:05.378 13:43:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:05.378 13:43:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:05.378 13:43:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.378 13:43:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.378 13:43:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.378 13:43:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:05.378 13:43:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.378 13:43:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.378 13:43:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:05.378 13:43:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.378 13:43:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.378 13:43:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:05.378 13:43:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:05.378 13:43:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.378 13:43:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.378 13:43:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.378 13:43:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.378 13:43:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:05.378 13:43:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.638 13:43:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.638 13:43:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.638 13:43:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:05.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:32:05.638 00:32:05.638 --- 10.0.0.2 ping statistics --- 00:32:05.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.638 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:32:05.638 13:43:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:32:05.638 00:32:05.638 --- 10.0.0.1 ping statistics --- 00:32:05.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.638 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:32:05.638 13:43:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.638 13:43:02 -- nvmf/common.sh@410 -- # return 0 00:32:05.638 13:43:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:05.638 13:43:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.638 13:43:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:05.638 13:43:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:05.638 13:43:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.638 13:43:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:05.638 13:43:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:05.638 13:43:02 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:05.638 13:43:02 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:05.638 13:43:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.638 13:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.638 13:43:02 -- common/autotest_common.sh@10 -- # set +x 00:32:05.638 ************************************ 00:32:05.638 START TEST nvmf_digest_clean 00:32:05.638 ************************************ 00:32:05.638 13:43:02 -- common/autotest_common.sh@1104 -- # run_digest 00:32:05.638 13:43:02 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:05.638 13:43:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:05.638 13:43:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:05.638 13:43:02 -- common/autotest_common.sh@10 -- # set +x 00:32:05.638 13:43:03 -- nvmf/common.sh@469 -- # nvmfpid=1171449 00:32:05.638 13:43:03 -- nvmf/common.sh@470 -- # waitforlisten 1171449 00:32:05.638 13:43:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:05.638 13:43:03 -- common/autotest_common.sh@819 -- # '[' -z 1171449 ']' 00:32:05.638 13:43:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.638 13:43:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.638 13:43:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.638 13:43:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.638 13:43:03 -- common/autotest_common.sh@10 -- # set +x 00:32:05.638 [2024-07-26 13:43:03.052078] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:05.638 [2024-07-26 13:43:03.052129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.638 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.898 [2024-07-26 13:43:03.117492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.898 [2024-07-26 13:43:03.146001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:05.898 [2024-07-26 13:43:03.146123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.898 [2024-07-26 13:43:03.146132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.898 [2024-07-26 13:43:03.146139] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.898 [2024-07-26 13:43:03.146157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.469 13:43:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:06.469 13:43:03 -- common/autotest_common.sh@852 -- # return 0 00:32:06.469 13:43:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:06.469 13:43:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:06.469 13:43:03 -- common/autotest_common.sh@10 -- # set +x 00:32:06.469 13:43:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.469 13:43:03 -- host/digest.sh@120 -- # common_target_config 00:32:06.469 13:43:03 -- host/digest.sh@43 -- # rpc_cmd 00:32:06.469 13:43:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.469 13:43:03 -- common/autotest_common.sh@10 -- # set +x 00:32:06.469 null0 00:32:06.469 [2024-07-26 13:43:03.922634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.730 [2024-07-26 13:43:03.946819] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.730 13:43:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.730 13:43:03 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:06.730 13:43:03 -- host/digest.sh@77 -- # local rw bs qd 00:32:06.730 13:43:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:06.730 13:43:03 -- host/digest.sh@80 -- # rw=randread 00:32:06.730 13:43:03 -- host/digest.sh@80 -- # bs=4096 00:32:06.730 13:43:03 -- host/digest.sh@80 -- # qd=128 00:32:06.730 13:43:03 -- host/digest.sh@82 -- # bperfpid=1171615 00:32:06.730 13:43:03 -- host/digest.sh@83 -- # waitforlisten 1171615 /var/tmp/bperf.sock 00:32:06.730 13:43:03 -- common/autotest_common.sh@819 -- # '[' -z 1171615 ']' 00:32:06.730 13:43:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:06.730 13:43:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:06.730 13:43:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:06.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:06.730 13:43:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:06.730 13:43:03 -- common/autotest_common.sh@10 -- # set +x 00:32:06.730 13:43:03 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:06.730 [2024-07-26 13:43:03.996096] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:06.730 [2024-07-26 13:43:03.996144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171615 ] 00:32:06.730 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.730 [2024-07-26 13:43:04.071483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.730 [2024-07-26 13:43:04.100387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.301 13:43:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:07.301 13:43:04 -- common/autotest_common.sh@852 -- # return 0 00:32:07.301 13:43:04 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:07.301 13:43:04 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:07.301 13:43:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:07.561 13:43:04 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:07.561 13:43:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:07.821 nvme0n1 00:32:08.130 13:43:05 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:08.130 13:43:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:08.130 Running I/O for 2 seconds... 
00:32:10.059 00:32:10.059 Latency(us) 00:32:10.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:10.059 nvme0n1 : 2.00 22545.04 88.07 0.00 0.00 5670.21 3099.31 16274.77 00:32:10.059 =================================================================================================================== 00:32:10.059 Total : 22545.04 88.07 0.00 0.00 5670.21 3099.31 16274.77 00:32:10.059 0 00:32:10.059 13:43:07 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:10.059 13:43:07 -- host/digest.sh@92 -- # get_accel_stats 00:32:10.059 13:43:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:10.059 13:43:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:10.059 | select(.opcode=="crc32c") 00:32:10.059 | "\(.module_name) \(.executed)"' 00:32:10.059 13:43:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:10.320 13:43:07 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:10.320 13:43:07 -- host/digest.sh@93 -- # exp_module=software 00:32:10.320 13:43:07 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:10.320 13:43:07 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:10.320 13:43:07 -- host/digest.sh@97 -- # killprocess 1171615 00:32:10.320 13:43:07 -- common/autotest_common.sh@926 -- # '[' -z 1171615 ']' 00:32:10.320 13:43:07 -- common/autotest_common.sh@930 -- # kill -0 1171615 00:32:10.320 13:43:07 -- common/autotest_common.sh@931 -- # uname 00:32:10.320 13:43:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:10.320 13:43:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1171615 00:32:10.320 13:43:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:10.320 13:43:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:10.320 13:43:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1171615' 00:32:10.320 killing process with pid 1171615 00:32:10.320 13:43:07 -- common/autotest_common.sh@945 -- # kill 1171615 00:32:10.320 Received shutdown signal, test time was about 2.000000 seconds 00:32:10.320 00:32:10.320 Latency(us) 00:32:10.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.320 =================================================================================================================== 00:32:10.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.320 13:43:07 -- common/autotest_common.sh@950 -- # wait 1171615 00:32:10.320 13:43:07 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:10.320 13:43:07 -- host/digest.sh@77 -- # local rw bs qd 00:32:10.320 13:43:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:10.320 13:43:07 -- host/digest.sh@80 -- # rw=randread 00:32:10.320 13:43:07 -- host/digest.sh@80 -- # bs=131072 00:32:10.320 13:43:07 -- host/digest.sh@80 -- # qd=16 00:32:10.320 13:43:07 -- host/digest.sh@82 -- # bperfpid=1172401 00:32:10.320 13:43:07 -- host/digest.sh@83 -- # waitforlisten 1172401 /var/tmp/bperf.sock 00:32:10.320 13:43:07 -- common/autotest_common.sh@819 -- # '[' -z 1172401 ']' 00:32:10.320 13:43:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:10.320 13:43:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:10.320 13:43:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:32:10.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:10.320 13:43:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:10.320 13:43:07 -- common/autotest_common.sh@10 -- # set +x 00:32:10.320 13:43:07 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:10.320 [2024-07-26 13:43:07.758624] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:10.320 [2024-07-26 13:43:07.758680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172401 ] 00:32:10.320 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:10.320 Zero copy mechanism will not be used. 00:32:10.320 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.581 [2024-07-26 13:43:07.833632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.581 [2024-07-26 13:43:07.862929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.153 13:43:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:11.153 13:43:08 -- common/autotest_common.sh@852 -- # return 0 00:32:11.153 13:43:08 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:11.153 13:43:08 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:11.153 13:43:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:11.414 13:43:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:11.414 13:43:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:11.675 nvme0n1 00:32:11.675 13:43:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:11.675 13:43:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:11.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:11.675 Zero copy mechanism will not be used. 00:32:11.675 Running I/O for 2 seconds... 
00:32:14.223 00:32:14.223 Latency(us) 00:32:14.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.223 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:14.223 nvme0n1 : 2.00 1886.41 235.80 0.00 0.00 8477.18 6608.21 20534.61 00:32:14.223 =================================================================================================================== 00:32:14.223 Total : 1886.41 235.80 0.00 0.00 8477.18 6608.21 20534.61 00:32:14.223 0 00:32:14.223 13:43:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:14.223 13:43:11 -- host/digest.sh@92 -- # get_accel_stats 00:32:14.223 13:43:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:14.223 13:43:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:14.223 | select(.opcode=="crc32c") 00:32:14.223 | "\(.module_name) \(.executed)"' 00:32:14.223 13:43:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.223 13:43:11 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:14.223 13:43:11 -- host/digest.sh@93 -- # exp_module=software 00:32:14.223 13:43:11 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:14.223 13:43:11 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:14.223 13:43:11 -- host/digest.sh@97 -- # killprocess 1172401 00:32:14.223 13:43:11 -- common/autotest_common.sh@926 -- # '[' -z 1172401 ']' 00:32:14.223 13:43:11 -- common/autotest_common.sh@930 -- # kill -0 1172401 00:32:14.223 13:43:11 -- common/autotest_common.sh@931 -- # uname 00:32:14.223 13:43:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:14.223 13:43:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1172401 00:32:14.223 13:43:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:14.223 13:43:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:14.223 13:43:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1172401' 00:32:14.223 killing process with pid 1172401 00:32:14.223 13:43:11 -- common/autotest_common.sh@945 -- # kill 1172401 00:32:14.223 Received shutdown signal, test time was about 2.000000 seconds 00:32:14.223 00:32:14.223 Latency(us) 00:32:14.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.223 =================================================================================================================== 00:32:14.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.223 13:43:11 -- common/autotest_common.sh@950 -- # wait 1172401 00:32:14.223 13:43:11 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:14.223 13:43:11 -- host/digest.sh@77 -- # local rw bs qd 00:32:14.223 13:43:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:14.223 13:43:11 -- host/digest.sh@80 -- # rw=randwrite 00:32:14.223 13:43:11 -- host/digest.sh@80 -- # bs=4096 00:32:14.223 13:43:11 -- host/digest.sh@80 -- # qd=128 00:32:14.223 13:43:11 -- host/digest.sh@82 -- # bperfpid=1173186 00:32:14.223 13:43:11 -- host/digest.sh@83 -- # waitforlisten 1173186 /var/tmp/bperf.sock 00:32:14.223 13:43:11 -- common/autotest_common.sh@819 -- # '[' -z 1173186 ']' 00:32:14.223 13:43:11 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:14.224 13:43:11 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:14.224 13:43:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:14.224 13:43:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:14.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:14.224 13:43:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:14.224 13:43:11 -- common/autotest_common.sh@10 -- # set +x 00:32:14.224 [2024-07-26 13:43:11.540562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:14.224 [2024-07-26 13:43:11.540618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173186 ] 00:32:14.224 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.224 [2024-07-26 13:43:11.616902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.224 [2024-07-26 13:43:11.641163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.166 13:43:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:15.166 13:43:12 -- common/autotest_common.sh@852 -- # return 0 00:32:15.166 13:43:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:15.166 13:43:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:15.166 13:43:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:15.166 13:43:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:15.166 13:43:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:15.428 nvme0n1 00:32:15.428 13:43:12 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:15.428 13:43:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:15.428 Running I/O for 2 seconds... 
00:32:17.348 00:32:17.348 Latency(us) 00:32:17.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.348 nvme0n1 : 2.00 22402.76 87.51 0.00 0.00 5707.88 2812.59 16930.13 00:32:17.348 =================================================================================================================== 00:32:17.348 Total : 22402.76 87.51 0.00 0.00 5707.88 2812.59 16930.13 00:32:17.348 0 00:32:17.348 13:43:14 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:17.348 13:43:14 -- host/digest.sh@92 -- # get_accel_stats 00:32:17.348 13:43:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:17.349 13:43:14 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:17.349 | select(.opcode=="crc32c") 00:32:17.349 | "\(.module_name) \(.executed)"' 00:32:17.349 13:43:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:17.609 13:43:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:17.609 13:43:14 -- host/digest.sh@93 -- # exp_module=software 00:32:17.609 13:43:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:17.609 13:43:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:17.609 13:43:14 -- host/digest.sh@97 -- # killprocess 1173186 00:32:17.609 13:43:14 -- common/autotest_common.sh@926 -- # '[' -z 1173186 ']' 00:32:17.609 13:43:14 -- common/autotest_common.sh@930 -- # kill -0 1173186 00:32:17.609 13:43:14 -- common/autotest_common.sh@931 -- # uname 00:32:17.609 13:43:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:17.609 13:43:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1173186 00:32:17.609 13:43:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:17.609 13:43:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:17.609 13:43:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1173186' 00:32:17.610 killing process with pid 1173186 00:32:17.610 13:43:14 -- common/autotest_common.sh@945 -- # kill 1173186 00:32:17.610 Received shutdown signal, test time was about 2.000000 seconds 00:32:17.610 00:32:17.610 Latency(us) 00:32:17.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.610 =================================================================================================================== 00:32:17.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.610 13:43:14 -- common/autotest_common.sh@950 -- # wait 1173186 00:32:17.870 13:43:15 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:17.870 13:43:15 -- host/digest.sh@77 -- # local rw bs qd 00:32:17.870 13:43:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:17.870 13:43:15 -- host/digest.sh@80 -- # rw=randwrite 00:32:17.870 13:43:15 -- host/digest.sh@80 -- # bs=131072 00:32:17.870 13:43:15 -- host/digest.sh@80 -- # qd=16 00:32:17.870 13:43:15 -- host/digest.sh@82 -- # bperfpid=1173879 00:32:17.870 13:43:15 -- host/digest.sh@83 -- # waitforlisten 1173879 /var/tmp/bperf.sock 00:32:17.870 13:43:15 -- common/autotest_common.sh@819 -- # '[' -z 1173879 ']' 00:32:17.870 13:43:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.870 13:43:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:17.870 13:43:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.870 13:43:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:17.870 13:43:15 -- common/autotest_common.sh@10 -- # set +x 00:32:17.870 13:43:15 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:17.870 [2024-07-26 13:43:15.139519] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:17.870 [2024-07-26 13:43:15.139577] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173879 ] 00:32:17.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:17.870 Zero copy mechanism will not be used. 00:32:17.870 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.870 [2024-07-26 13:43:15.213546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.870 [2024-07-26 13:43:15.240010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.442 13:43:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:18.442 13:43:15 -- common/autotest_common.sh@852 -- # return 0 00:32:18.442 13:43:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:18.442 13:43:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:18.442 13:43:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:18.703 13:43:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.703 13:43:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.964 nvme0n1 00:32:19.225 13:43:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:19.225 13:43:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:19.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.225 Zero copy mechanism will not be used. 00:32:19.225 Running I/O for 2 seconds... 
00:32:21.140 00:32:21.140 Latency(us) 00:32:21.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:21.140 nvme0n1 : 2.01 2143.38 267.92 0.00 0.00 7449.62 5597.87 29054.29 00:32:21.140 =================================================================================================================== 00:32:21.140 Total : 2143.38 267.92 0.00 0.00 7449.62 5597.87 29054.29 00:32:21.140 0 00:32:21.140 13:43:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:21.140 13:43:18 -- host/digest.sh@92 -- # get_accel_stats 00:32:21.140 13:43:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:21.140 13:43:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:21.140 13:43:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:21.140 | select(.opcode=="crc32c") 00:32:21.140 | "\(.module_name) \(.executed)"' 00:32:21.401 13:43:18 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:21.401 13:43:18 -- host/digest.sh@93 -- # exp_module=software 00:32:21.401 13:43:18 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:21.401 13:43:18 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:21.401 13:43:18 -- host/digest.sh@97 -- # killprocess 1173879 00:32:21.401 13:43:18 -- common/autotest_common.sh@926 -- # '[' -z 1173879 ']' 00:32:21.401 13:43:18 -- common/autotest_common.sh@930 -- # kill -0 1173879 00:32:21.401 13:43:18 -- common/autotest_common.sh@931 -- # uname 00:32:21.401 13:43:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:21.401 13:43:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1173879 00:32:21.401 13:43:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:21.401 13:43:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:21.401 13:43:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1173879' 00:32:21.401 killing process with pid 1173879 00:32:21.401 13:43:18 -- common/autotest_common.sh@945 -- # kill 1173879 00:32:21.401 Received shutdown signal, test time was about 2.000000 seconds 00:32:21.401 00:32:21.401 Latency(us) 00:32:21.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.401 =================================================================================================================== 00:32:21.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:21.401 13:43:18 -- common/autotest_common.sh@950 -- # wait 1173879 00:32:21.401 13:43:18 -- host/digest.sh@126 -- # killprocess 1171449 00:32:21.401 13:43:18 -- common/autotest_common.sh@926 -- # '[' -z 1171449 ']' 00:32:21.401 13:43:18 -- common/autotest_common.sh@930 -- # kill -0 1171449 00:32:21.401 13:43:18 -- common/autotest_common.sh@931 -- # uname 00:32:21.401 13:43:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:21.401 13:43:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1171449 00:32:21.662 13:43:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:21.662 13:43:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:21.662 13:43:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1171449' 00:32:21.662 killing process with pid 1171449 00:32:21.662 13:43:18 -- common/autotest_common.sh@945 -- # kill 1171449 00:32:21.662 13:43:18 -- common/autotest_common.sh@950 -- # wait 1171449 
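The pass/fail decision printed above reduces to one RPC plus a jq filter: the harness reads the crc32c accel statistics back from the bdevperf app and checks that the expected module did the work. A minimal sketch of that check, with the filter copied from the log (comments editorial):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Which accel module executed crc32c, and how many operations did it complete?
read -r acc_module acc_executed < <($rpc_py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
# The clean test expects the software module to have computed at least one digest.
(( acc_executed > 0 )) && [[ $acc_module == software ]] && echo 'crc32c digests computed in software'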
00:32:21.662 00:32:21.662 real 0m16.040s 00:32:21.662 user 0m31.605s 00:32:21.662 sys 0m3.027s 00:32:21.662 13:43:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.662 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.662 ************************************ 00:32:21.662 END TEST nvmf_digest_clean 00:32:21.662 ************************************ 00:32:21.662 13:43:19 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:21.662 13:43:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:21.662 13:43:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.662 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.662 ************************************ 00:32:21.662 START TEST nvmf_digest_error 00:32:21.662 ************************************ 00:32:21.662 13:43:19 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:21.662 13:43:19 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:21.662 13:43:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:21.662 13:43:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:21.662 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.662 13:43:19 -- nvmf/common.sh@469 -- # nvmfpid=1174593 00:32:21.662 13:43:19 -- nvmf/common.sh@470 -- # waitforlisten 1174593 00:32:21.662 13:43:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:21.662 13:43:19 -- common/autotest_common.sh@819 -- # '[' -z 1174593 ']' 00:32:21.662 13:43:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.662 13:43:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:21.662 13:43:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.662 13:43:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:21.662 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.923 [2024-07-26 13:43:19.140130] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:21.923 [2024-07-26 13:43:19.140181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.923 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.923 [2024-07-26 13:43:19.207975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.923 [2024-07-26 13:43:19.235281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:21.923 [2024-07-26 13:43:19.235408] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.923 [2024-07-26 13:43:19.235417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.923 [2024-07-26 13:43:19.235423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
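The error-path test that starts here brings up its own NVMe-oF target held at the RPC barrier, so that crc32c can be rerouted before subsystem initialization completes. A minimal sketch of that launch, using the binary path, network namespace and flags shown in the log (comments editorial):

# Start the target in the test namespace with tracepoint group mask 0xFFFF, paused at --wait-for-rpc.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# waitforlisten (common/autotest_common.sh) then polls /var/tmp/spdk.sock until the app answers RPCs;
# only after that does the harness apply the accel and target configuration that follows in the log.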
00:32:21.923 [2024-07-26 13:43:19.235440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.495 13:43:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:22.495 13:43:19 -- common/autotest_common.sh@852 -- # return 0 00:32:22.495 13:43:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:22.495 13:43:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:22.495 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:22.495 13:43:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.495 13:43:19 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:22.495 13:43:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.495 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:22.495 [2024-07-26 13:43:19.949464] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:22.495 13:43:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.495 13:43:19 -- host/digest.sh@104 -- # common_target_config 00:32:22.495 13:43:19 -- host/digest.sh@43 -- # rpc_cmd 00:32:22.495 13:43:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.495 13:43:19 -- common/autotest_common.sh@10 -- # set +x 00:32:22.756 null0 00:32:22.756 [2024-07-26 13:43:20.020389] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.756 [2024-07-26 13:43:20.044605] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.756 13:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.756 13:43:20 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:22.756 13:43:20 -- host/digest.sh@54 -- # local rw bs qd 00:32:22.756 13:43:20 -- host/digest.sh@56 -- # rw=randread 00:32:22.756 13:43:20 -- host/digest.sh@56 -- # bs=4096 00:32:22.756 13:43:20 -- host/digest.sh@56 -- # qd=128 00:32:22.756 13:43:20 -- host/digest.sh@58 -- # bperfpid=1174838 00:32:22.756 13:43:20 -- host/digest.sh@60 -- # waitforlisten 1174838 /var/tmp/bperf.sock 00:32:22.756 13:43:20 -- common/autotest_common.sh@819 -- # '[' -z 1174838 ']' 00:32:22.756 13:43:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:22.756 13:43:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:22.756 13:43:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:22.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:22.756 13:43:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:22.756 13:43:20 -- common/autotest_common.sh@10 -- # set +x 00:32:22.756 13:43:20 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:22.756 [2024-07-26 13:43:20.092549] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:22.756 [2024-07-26 13:43:20.092601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1174838 ] 00:32:22.756 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.756 [2024-07-26 13:43:20.168453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.756 [2024-07-26 13:43:20.195248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.698 13:43:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:23.699 13:43:20 -- common/autotest_common.sh@852 -- # return 0 00:32:23.699 13:43:20 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:23.699 13:43:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:23.699 13:43:20 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:23.699 13:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.699 13:43:20 -- common/autotest_common.sh@10 -- # set +x 00:32:23.699 13:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.699 13:43:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:23.699 13:43:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:23.960 nvme0n1 00:32:23.960 13:43:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:23.960 13:43:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.960 13:43:21 -- common/autotest_common.sh@10 -- # set +x 00:32:23.960 13:43:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.960 13:43:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:23.960 13:43:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:23.960 Running I/O for 2 seconds... 
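Before the stream of read errors below begins, the harness reroutes crc32c through SPDK's error-injecting accel module and arms the corruption on the initiator side. The sequence below is condensed from the RPC calls visible above (rpc.py defaults to the target's /var/tmp/spdk.sock when no -s is given; comments editorial):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
# Target side: assign the crc32c opcode to the 'error' accel module before framework init.
$rpc_py accel_assign_opc -o crc32c -m error
# (common_target_config then creates the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener; those RPCs are not shown in the log and are omitted here.)
# Initiator side (bdevperf on bperf.sock): keep per-NVMe error stats, retry failed I/O at the bdev layer,
# and leave injection disabled while the data-digest-enabled controller is attached.
$rpc_py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$rpc_py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
$rpc_py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm the injection: corrupt 256 crc32c operations, then run the timed randread workload.
# Each corrupted digest surfaces below as a 'data digest error' plus a TRANSIENT TRANSPORT ERROR completion.
$rpc_py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
$bperf_py -s /var/tmp/bperf.sock perform_tests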
00:32:24.221 [2024-07-26 13:43:21.455144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.221 [2024-07-26 13:43:21.455176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.221 [2024-07-26 13:43:21.455186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.221 [2024-07-26 13:43:21.466397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.221 [2024-07-26 13:43:21.466419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.221 [2024-07-26 13:43:21.466426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.478245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.478264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.478271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.489438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.489457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.489464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.500724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.500744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.500750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.512563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.512582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.512588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.522969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.522987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.523000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.537554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.537572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.537579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.549387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.549405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.549412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.561298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.561317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.561323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.572189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.572210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.572217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.583247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.583273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.583279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.595146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.595163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.595170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.606810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.606828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.606835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.617786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.617803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.617810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.629767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.629788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.629794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.640710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.640727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.640733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.652651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.652668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.652674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.663846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.663870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.675189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.675210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.675216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.222 [2024-07-26 13:43:21.686314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.222 [2024-07-26 13:43:21.686331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.222 [2024-07-26 13:43:21.686338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.698104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.698121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.698128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.709255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.709273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.709279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.721191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.721212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.721218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.732295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.732312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.732318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.744071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.744088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.744094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.755249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.755266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.755272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.766035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.766051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 
[2024-07-26 13:43:21.766057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.778098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.778115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.778121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.789287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.789303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.789310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.800352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.800370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.800376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.812511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.812528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.484 [2024-07-26 13:43:21.812534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.484 [2024-07-26 13:43:21.823439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.484 [2024-07-26 13:43:21.823457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.823466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.835419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.835436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.835442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.846051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.846069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13435 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.846075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.858164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.858181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.858187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.869184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.869205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.869211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.881007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.881024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.881030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.892116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.892133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.892140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.903307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.903324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.903330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.915117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.915134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.915140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.926150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.926167] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.926173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.937964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.937981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.937987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.485 [2024-07-26 13:43:21.948980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.485 [2024-07-26 13:43:21.948998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.485 [2024-07-26 13:43:21.949004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:21.960101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:21.960118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:21.960125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:21.972019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:21.972036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:21.972042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:21.983033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:21.983051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:21.983058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:21.995000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:21.995017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:21.995024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.005999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.006017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.006024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.017053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.017070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.017079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.029172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.029190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.029196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.040375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.040392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.040398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.051820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.051837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.051844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.063644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.063661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.063668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.074600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.074617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.074623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.086108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.086125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.086131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.097301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.097318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.097324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.109074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.747 [2024-07-26 13:43:22.109091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.747 [2024-07-26 13:43:22.109098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.747 [2024-07-26 13:43:22.120116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.120135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.131282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.131298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.131305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.143204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.143221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.143227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.154291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.154308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.154314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.165539] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.165555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.165562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.177597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.177614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.177620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.188774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.188791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.188798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.200503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.200520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.200526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.748 [2024-07-26 13:43:22.211650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:24.748 [2024-07-26 13:43:22.211667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.748 [2024-07-26 13:43:22.211673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.009 [2024-07-26 13:43:22.224673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.009 [2024-07-26 13:43:22.224690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.009 [2024-07-26 13:43:22.224696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.009 [2024-07-26 13:43:22.234423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.009 [2024-07-26 13:43:22.234440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.009 [2024-07-26 13:43:22.234447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:25.009 [2024-07-26 13:43:22.247071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.009 [2024-07-26 13:43:22.247088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.009 [2024-07-26 13:43:22.247094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.009 [2024-07-26 13:43:22.258132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.009 [2024-07-26 13:43:22.258148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.009 [2024-07-26 13:43:22.258154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.270013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.270030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.270037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.281109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.281126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.281132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.293107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.293124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.293130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.304079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.304096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.304102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.315129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.315146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.315155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.326369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.326386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.326392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.338292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.338309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.338316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.349220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.349237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.349244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.361080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.361097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.361104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.371999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.372016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.372022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.383217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.395068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.395084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.395091] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.406240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.406257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.406263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.417293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.417313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.417319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.428453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.428470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.428476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.440559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.440576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.440582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.451464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.451482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.451489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.462719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.462736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.462742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.010 [2024-07-26 13:43:22.474464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.010 [2024-07-26 13:43:22.474480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.010 [2024-07-26 13:43:22.474487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.485466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.485483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.485490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.497463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.497480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.497486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.508527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.508545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.508551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.519728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.519745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.519752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.531786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.531803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.531809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.542773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.542790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.542796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.553820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.553837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:25.290 [2024-07-26 13:43:22.553843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.565942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.565959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.565965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.576984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.577000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.577007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.587883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.587900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.587907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.599845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.599861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.599868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.611005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.611025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.611032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.621971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.621988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.621995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.634009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.634026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.634032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.644734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.644751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.644758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.656864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.656881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.656887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.668057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.668075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.668081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.679068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.679085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.679091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.691170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.691187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.691194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.701831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.701849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.701855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.714041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.714058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.714064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.725251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.725268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.725275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.736317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.736334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.736341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.290 [2024-07-26 13:43:22.747410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.290 [2024-07-26 13:43:22.747427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.290 [2024-07-26 13:43:22.747433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.566 [2024-07-26 13:43:22.759275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.566 [2024-07-26 13:43:22.759292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.566 [2024-07-26 13:43:22.759299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.566 [2024-07-26 13:43:22.770386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.770402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.770409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.781615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.781632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.781638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.793495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 
[2024-07-26 13:43:22.793512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.793518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.804666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.804693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.815802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.815819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.815825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.827767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.827784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.827791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.838663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.838680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.838687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.849925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.849941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.849948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.861982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.861998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.873178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.873196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.873206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.884249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.884265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.884271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.896233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.896250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.896256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.907263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.907283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.907289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.918337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.918353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.918360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.929599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.929615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.929622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.941660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.941677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.941683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.952607] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.952624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.952630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.963730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.963747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.963754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.975836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.975853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.975859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.986699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.986716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.986723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:22.998640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:22.998657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:22.998663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:23.009844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:23.009861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.567 [2024-07-26 13:43:23.009868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.567 [2024-07-26 13:43:23.020940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.567 [2024-07-26 13:43:23.020957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.568 [2024-07-26 13:43:23.020964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
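Each of the records above reports the same failure mode: nvme_tcp.c detects a data digest (crc32c) mismatch on qpair 0x7d2590 while receiving READ data, and the command is completed with status 00/22, COMMAND TRANSIENT TRANSPORT ERROR. A minimal way to tally such a run from a saved copy of this console output is sketched below; console.log is only a placeholder name for such a copy, not a file produced by this job.

  # Count digest failures on the qpair and the matching 00/22 completions,
  # using the exact substrings printed in the records above. grep -o is used
  # so occurrences are counted even where several records share one line.
  grep -o 'data digest error on tqpair=(0x7d2590)' console.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l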
00:32:25.568 [2024-07-26 13:43:23.032808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.568 [2024-07-26 13:43:23.032824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.568 [2024-07-26 13:43:23.032831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.829 [2024-07-26 13:43:23.043979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.829 [2024-07-26 13:43:23.043996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.829 [2024-07-26 13:43:23.044002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.054804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.054821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.054827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.066982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.066998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.067004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.078224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.078241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.078248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.089325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.089343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.089351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.100404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.100421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.100430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.112369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.112387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.112393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.123648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.123664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.123671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.135432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.135449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.135456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.146521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.146539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.146545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.157497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.157515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.157521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.169764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.169781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.169787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.180766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.180783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.180790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.191827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.191844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.191850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.203037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.203057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.203063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.214873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.214890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.214896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.226002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.226020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.226026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.237878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.237895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.237902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.249060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.249078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.249084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.260872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.260896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.271872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.271889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.271896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.282959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.282978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.282984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:25.830 [2024-07-26 13:43:23.294205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:25.830 [2024-07-26 13:43:23.294222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:25.830 [2024-07-26 13:43:23.294229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.306207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.306224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.306230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.317107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.317124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.317130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.329145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.329163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.329169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.340188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.340209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 
[2024-07-26 13:43:23.340216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.351271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.351294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.351300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.363273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.363290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.363297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.374312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.374330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.374336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.385382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.385399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.397359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.397381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.397387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.408481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.408499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.408505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.420415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.420433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16495 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.420439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 [2024-07-26 13:43:23.431442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d2590) 00:32:26.091 [2024-07-26 13:43:23.431459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.091 [2024-07-26 13:43:23.431466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.091 00:32:26.091 Latency(us) 00:32:26.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:26.091 nvme0n1 : 2.00 22264.86 86.97 0.00 0.00 5742.06 3085.65 18459.31 00:32:26.091 =================================================================================================================== 00:32:26.091 Total : 22264.86 86.97 0.00 0.00 5742.06 3085.65 18459.31 00:32:26.091 0 00:32:26.091 13:43:23 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:26.091 13:43:23 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:26.091 13:43:23 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:26.091 | .driver_specific 00:32:26.091 | .nvme_error 00:32:26.091 | .status_code 00:32:26.092 | .command_transient_transport_error' 00:32:26.092 13:43:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:26.352 13:43:23 -- host/digest.sh@71 -- # (( 174 > 0 )) 00:32:26.352 13:43:23 -- host/digest.sh@73 -- # killprocess 1174838 00:32:26.352 13:43:23 -- common/autotest_common.sh@926 -- # '[' -z 1174838 ']' 00:32:26.352 13:43:23 -- common/autotest_common.sh@930 -- # kill -0 1174838 00:32:26.352 13:43:23 -- common/autotest_common.sh@931 -- # uname 00:32:26.352 13:43:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:26.352 13:43:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1174838 00:32:26.352 13:43:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:26.352 13:43:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:26.352 13:43:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1174838' 00:32:26.352 killing process with pid 1174838 00:32:26.352 13:43:23 -- common/autotest_common.sh@945 -- # kill 1174838 00:32:26.352 Received shutdown signal, test time was about 2.000000 seconds 00:32:26.352 00:32:26.352 Latency(us) 00:32:26.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.352 =================================================================================================================== 00:32:26.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.352 13:43:23 -- common/autotest_common.sh@950 -- # wait 1174838 00:32:26.352 13:43:23 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:32:26.352 13:43:23 -- host/digest.sh@54 -- # local rw bs qd 00:32:26.352 13:43:23 -- host/digest.sh@56 -- # rw=randread 00:32:26.352 13:43:23 -- host/digest.sh@56 -- # bs=131072 00:32:26.353 13:43:23 -- host/digest.sh@56 -- # qd=16 00:32:26.353 13:43:23 -- host/digest.sh@58 -- # bperfpid=1175603 00:32:26.353 13:43:23 -- 
host/digest.sh@60 -- # waitforlisten 1175603 /var/tmp/bperf.sock 00:32:26.353 13:43:23 -- common/autotest_common.sh@819 -- # '[' -z 1175603 ']' 00:32:26.353 13:43:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.353 13:43:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:26.353 13:43:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.353 13:43:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:26.353 13:43:23 -- common/autotest_common.sh@10 -- # set +x 00:32:26.353 13:43:23 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:26.353 [2024-07-26 13:43:23.816930] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:26.353 [2024-07-26 13:43:23.816989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175603 ] 00:32:26.353 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:26.353 Zero copy mechanism will not be used. 00:32:26.613 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.614 [2024-07-26 13:43:23.890260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.614 [2024-07-26 13:43:23.916669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.185 13:43:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:27.185 13:43:24 -- common/autotest_common.sh@852 -- # return 0 00:32:27.185 13:43:24 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:27.185 13:43:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:27.446 13:43:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:27.446 13:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.446 13:43:24 -- common/autotest_common.sh@10 -- # set +x 00:32:27.446 13:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.446 13:43:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.446 13:43:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.707 nvme0n1 00:32:27.707 13:43:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:27.707 13:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.707 13:43:25 -- common/autotest_common.sh@10 -- # set +x 00:32:27.707 13:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.707 13:43:25 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:27.707 13:43:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.707 I/O size of 131072 is greater than zero copy threshold (65536). 
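For readability, the setup that the host/digest.sh trace above performs for this 131072-byte, queue-depth-16 randread pass is restated below as plain shell. It only condenses commands already visible in the trace; $rootdir is shorthand introduced here for the SPDK checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the final iostat/jq step is the same get_transient_errcount check the script applied after the previous run.

  # bdevperf in wait mode (-z) on /var/tmp/bperf.sock: randread, 128 KiB I/O, qd 16, 2 s runtime.
  $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-error-code NVMe statistics and set bdev retry count to -1 on the initiator.
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any accel crc32c error injection, attach the controller with TCP data
  # digest enabled (--ddgst), then inject crc32c corruption so the digests stop
  # matching (flags exactly as traced).
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then read back the transient transport error counter,
  # which the script requires to be greater than zero.
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'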
00:32:27.707 Zero copy mechanism will not be used. 00:32:27.707 Running I/O for 2 seconds... 00:32:27.707 [2024-07-26 13:43:25.144375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.707 [2024-07-26 13:43:25.144407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.707 [2024-07-26 13:43:25.144417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.707 [2024-07-26 13:43:25.159292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.707 [2024-07-26 13:43:25.159313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.707 [2024-07-26 13:43:25.159320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.707 [2024-07-26 13:43:25.170814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.707 [2024-07-26 13:43:25.170833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.707 [2024-07-26 13:43:25.170839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.969 [2024-07-26 13:43:25.181897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.969 [2024-07-26 13:43:25.181916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.969 [2024-07-26 13:43:25.181922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.969 [2024-07-26 13:43:25.193288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.969 [2024-07-26 13:43:25.193307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.969 [2024-07-26 13:43:25.193313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.969 [2024-07-26 13:43:25.207532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.969 [2024-07-26 13:43:25.207550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.969 [2024-07-26 13:43:25.207556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.969 [2024-07-26 13:43:25.219738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.969 [2024-07-26 13:43:25.219757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.969 [2024-07-26 
13:43:25.219763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.969 [2024-07-26 13:43:25.229907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.229926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.229932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.239549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.239567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.239573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.248724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.248742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.248753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.259125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.259144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.259150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.269795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.269813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.279024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.279042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.279049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.288279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.288297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.288304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.297547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.297565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.297571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.306826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.306842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.316094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.316112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.316119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.325365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.325383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.325390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.334628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.334650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.343876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.343895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.343902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.353093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.353111] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.353118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.362320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.362338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.362344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.371541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.371559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.371565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.380773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.380791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.380798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.390008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.390027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.390034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.399253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.399271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.399277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.408535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.408553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.408559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.417699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 
13:43:25.417717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.417724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.427062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.427080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.427086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.970 [2024-07-26 13:43:25.436342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:27.970 [2024-07-26 13:43:25.436361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.970 [2024-07-26 13:43:25.436368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.445546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.445565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.445571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.454869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.454888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.454895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.464097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.464115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.464121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.473757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.473776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.473782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.484408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.484426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.494338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.494360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.494367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.503722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.503739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.503745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.512946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.512964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.512970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.522268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.522285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.522292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.531493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.232 [2024-07-26 13:43:25.531511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.232 [2024-07-26 13:43:25.531517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.232 [2024-07-26 13:43:25.540762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.540780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.540787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.550014] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.550033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.550040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.559372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.559390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.559396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.569143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.569162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.569168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.579210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.579229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.579235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.589434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.589454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.589460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.598153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.598170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.598177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.607564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.607582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.607588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.616845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.616863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.616870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.626053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.626078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.635274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.635292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.635298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.644468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.644485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.644491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.653692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.653710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.653719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.662918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.662935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.662942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.672355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.672373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.672379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.681516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.681533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.681540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.690720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.690738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.690744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.233 [2024-07-26 13:43:25.699941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.233 [2024-07-26 13:43:25.699959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.233 [2024-07-26 13:43:25.699966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.709141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.709159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.709166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.718366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.718384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.718391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.727551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.727569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.727576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.736804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.736825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.736831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.746022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.746039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.746046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.755223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.755241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.755248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.764418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.764437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.764443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.773699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.773718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.773724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.782880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.782898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.782905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.792086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.792104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.792111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.801291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.801309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.495 [2024-07-26 13:43:25.801315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.810513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.810530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.810537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.819723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.819740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.819747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.828929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.828947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.828953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.838170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.838187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.838193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.847367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.847385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.847391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.856661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.856679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.856685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.865939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.865957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.865963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.875133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.875151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.495 [2024-07-26 13:43:25.875157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.495 [2024-07-26 13:43:25.884370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.495 [2024-07-26 13:43:25.884387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.884394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.893538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.893559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.893565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.902738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.902755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.902762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.911960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.911977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.911984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.921134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.921152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.921158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.930334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.930351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.930358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.939547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.939564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.939571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.948749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.948766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.948773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.957932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.957949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.957956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.496 [2024-07-26 13:43:25.967163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.496 [2024-07-26 13:43:25.967180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.496 [2024-07-26 13:43:25.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:25.976483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:25.976500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:25.976507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:25.985656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:25.985673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:25.985680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:25.994875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 
00:32:28.758 [2024-07-26 13:43:25.994892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:25.994899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.004081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.004099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.004106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.013319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.013336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.013342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.022503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.022521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.022527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.031767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.031784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.031791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.040982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.041000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.041006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.050233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.050251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.059466] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.059485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.059491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.068665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.068682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.068689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.077916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.077940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.087098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.087115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.087122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.096289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.096307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.096313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.105505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.105522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.105529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.114722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.114740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.114746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.123906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.123924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.123930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.133094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.133121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.142310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.142328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.142334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.151525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.151542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.151548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.160716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.160733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.160740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.169926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.169943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.169950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.179236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.179254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.179261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.758 [2024-07-26 13:43:26.188453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.758 [2024-07-26 13:43:26.188471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.758 [2024-07-26 13:43:26.188477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.759 [2024-07-26 13:43:26.197718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.759 [2024-07-26 13:43:26.197735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.759 [2024-07-26 13:43:26.197742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.759 [2024-07-26 13:43:26.206903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.759 [2024-07-26 13:43:26.206920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.759 [2024-07-26 13:43:26.206927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.759 [2024-07-26 13:43:26.216146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.759 [2024-07-26 13:43:26.216164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.759 [2024-07-26 13:43:26.216170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.759 [2024-07-26 13:43:26.225311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:28.759 [2024-07-26 13:43:26.225329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.759 [2024-07-26 13:43:26.225335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.020 [2024-07-26 13:43:26.234509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.020 [2024-07-26 13:43:26.234527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.020 [2024-07-26 13:43:26.234533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.020 [2024-07-26 13:43:26.243716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.020 [2024-07-26 13:43:26.243735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.020 [2024-07-26 13:43:26.243741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.020 [2024-07-26 13:43:26.252928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.020 [2024-07-26 13:43:26.252946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.252952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.262125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.262143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.262150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.271328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.271346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.271353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.280536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.280560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.289734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.289754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.289761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.299063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.299080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.021 [2024-07-26 13:43:26.299086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:29.021 [2024-07-26 13:43:26.308321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400) 00:32:29.021 [2024-07-26 13:43:26.308338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:29.021 [2024-07-26 13:43:26.308345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:29.021 [2024-07-26 13:43:26.317545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400)
00:32:29.021 [2024-07-26 13:43:26.317562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:29.021 [2024-07-26 13:43:26.317568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x18ff400), READ sqid:1 cid:15 len:32 at a varying lba, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0) repeats roughly every 9 ms from 13:43:26.326 through 13:43:27.110 ...]
00:32:29.810 [2024-07-26 13:43:27.120243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff400)
00:32:29.810 [2024-07-26 13:43:27.120260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:29.810 [2024-07-26 13:43:27.120267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:29.810
00:32:29.810 Latency(us)
00:32:29.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:29.810 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:29.810 nvme0n1 : 2.00 3293.17 411.65 0.00 0.00 4855.77 4287.15 18240.85
00:32:29.810 ===================================================================================================================
00:32:29.810 Total : 3293.17 411.65 0.00 0.00 4855.77 4287.15 18240.85
00:32:29.810 0
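A quick cross-check of the summary table above (editorial arithmetic, not harness output): bdevperf reports MiB/s alongside IOPS for 131072-byte reads, and with queue depth 16 the IOPS figure should also roughly follow from the average latency.

# sanity-check of the randread summary line (sketch; numbers copied from the table above)
iops=3293.17; io_size=131072; qd=16; avg_lat_us=4855.77
printf 'MiB/s : %.2f\n' "$(echo "$iops * $io_size / (1024 * 1024)" | bc -l)"   # -> 411.65, matches the table
printf 'IOPS  : %.0f\n' "$(echo "$qd * 1000000 / $avg_lat_us" | bc -l)"        # -> ~3295, close to the reported 3293.17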
00:32:29.810 13:43:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:29.810 13:43:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:29.810 13:43:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:29.810 | .driver_specific
00:32:29.810 | .nvme_error
00:32:29.810 | .status_code
00:32:29.810 | .command_transient_transport_error'
00:32:29.810 13:43:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:30.071 13:43:27 -- host/digest.sh@71 -- # (( 212 > 0 ))
00:32:30.071 13:43:27 -- host/digest.sh@73 -- # killprocess 1175603
00:32:30.071 13:43:27 -- common/autotest_common.sh@926 -- # '[' -z 1175603 ']'
00:32:30.071 13:43:27 -- common/autotest_common.sh@930 -- # kill -0 1175603
00:32:30.071 13:43:27 -- common/autotest_common.sh@931 -- # uname
00:32:30.071 13:43:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:30.071 13:43:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1175603
00:32:30.071 13:43:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:30.071 13:43:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:30.071 13:43:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1175603'
killing process with pid 1175603
00:32:30.071 13:43:27 -- common/autotest_common.sh@945 -- # kill 1175603
Received shutdown signal, test time was about 2.000000 seconds
00:32:30.071
00:32:30.071 Latency(us)
00:32:30.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:30.071 ===================================================================================================================
00:32:30.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:30.071 13:43:27 -- common/autotest_common.sh@950 -- # wait 1175603
00:32:30.071 13:43:27 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:32:30.071 13:43:27 -- host/digest.sh@54 -- # local rw bs qd
00:32:30.071 13:43:27 -- host/digest.sh@56 -- # rw=randwrite
00:32:30.071 13:43:27 -- host/digest.sh@56 -- # bs=4096
00:32:30.071 13:43:27 -- host/digest.sh@56 -- # qd=128
00:32:30.071 13:43:27 -- host/digest.sh@58 -- # bperfpid=1176329
00:32:30.071 13:43:27 -- host/digest.sh@60 -- # waitforlisten 1176329 /var/tmp/bperf.sock
00:32:30.071 13:43:27 -- common/autotest_common.sh@819 -- # '[' -z 1176329 ']'
00:32:30.071 13:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:30.071 13:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:30.071 13:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:30.071 13:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:30.071 13:43:27 -- common/autotest_common.sh@10 -- # set +x
00:32:30.071 13:43:27 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:30.072 [2024-07-26 13:43:27.506668] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
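Condensed, the transient-error check traced above amounts to the following (a sketch only: the errcount variable name is mine, while the rpc.py invocation, the jq filter, and the count of 212 are exactly what the trace shows).

# count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the bdev layer (sketch of get_transient_errcount)
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the randread pass only succeeds if the injected digest errors were counted (212 on this run)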
00:32:30.072 [2024-07-26 13:43:27.506724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176329 ]
00:32:30.332 EAL: No free 2048 kB hugepages reported on node 1
00:32:30.332 [2024-07-26 13:43:27.582546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:30.332 [2024-07-26 13:43:27.608325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:30.902 13:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:30.902 13:43:28 -- common/autotest_common.sh@852 -- # return 0
00:32:30.902 13:43:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:30.902 13:43:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:31.163 13:43:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:31.163 13:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:31.163 13:43:28 -- common/autotest_common.sh@10 -- # set +x
00:32:31.163 13:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:31.163 13:43:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:31.163 13:43:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:31.425 nvme0n1
00:32:31.425 13:43:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:31.425 13:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:31.425 13:43:28 -- common/autotest_common.sh@10 -- # set +x
00:32:31.425 13:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:31.425 13:43:28 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:31.425 13:43:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:31.425 Running I/O for 2 seconds...
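Stripped of the xtrace noise, the RPC sequence above sets up the second error-injection pass (4096-byte random writes at queue depth 128, per the bdevperf command line traced earlier). The following is a sketch of the same calls; the RPC shorthand variable is mine, the commands and arguments are the ones in the trace.

# set up the randwrite digest-error pass against the bdevperf instance on /var/tmp/bperf.sock (sketch)
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters, retry at the bdev layer
$RPC accel_error_inject_error -o crc32c -t disable                   # start from a clean injection state
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with TCP data digest (DDGST) enabled
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt crc32c results (parameters as traced)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests                             # drive the 2-second workload (bdevperf -t 2)

The WRITE completions that follow show the corrupted digests surfacing as COMMAND TRANSIENT TRANSPORT ERROR (00/22), mirroring the read pass above.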
00:32:31.425 [2024-07-26 13:43:28.855985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190fb8b8
00:32:31.425 [2024-07-26 13:43:28.856943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:31.425 [2024-07-26 13:43:28.856971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:31.425 [2024-07-26 13:43:28.868036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190f57b0
00:32:31.425 [2024-07-26 13:43:28.869083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:31.425 [2024-07-26 13:43:28.869101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error on tqpair=(0x2253d20) with a varying pdu, WRITE len:1 at a varying lba, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0) repeats roughly every 12 ms from 13:43:28.879 through 13:43:29.329 ...]
00:32:31.949 [2024-07-26 13:43:29.340859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0
00:32:31.949 [2024-07-26 13:43:29.341255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:31.949 [2024-07-26
13:43:29.341271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.352618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.352881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.352896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.364419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.364779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.364794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.376254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.376635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.376650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.388133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.388560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.388576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.400012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.400246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.400261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:31.949 [2024-07-26 13:43:29.411823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:31.949 [2024-07-26 13:43:29.412226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.949 [2024-07-26 13:43:29.412242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.423628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.423986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:32.210 [2024-07-26 13:43:29.424001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.435464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.435864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.435879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.447283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.447631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.447647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.459118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.459484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.459500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.470945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.471182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.471205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.482756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.483122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.483138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.494582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.494973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.494989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.506329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.506571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20661 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.506586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.518158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.518541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.518557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.529973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.530364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.530379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.541824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.542050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.553704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.554121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.554136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.565473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.565844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.565859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.577264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.577683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.577699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.589077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.589319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6888 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.589334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.600885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.601249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.601265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.210 [2024-07-26 13:43:29.612708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.210 [2024-07-26 13:43:29.613089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.210 [2024-07-26 13:43:29.613105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.211 [2024-07-26 13:43:29.624525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.211 [2024-07-26 13:43:29.624891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.211 [2024-07-26 13:43:29.624907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.211 [2024-07-26 13:43:29.636301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.211 [2024-07-26 13:43:29.636619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.211 [2024-07-26 13:43:29.636635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.211 [2024-07-26 13:43:29.648155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.211 [2024-07-26 13:43:29.648489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.211 [2024-07-26 13:43:29.648506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.211 [2024-07-26 13:43:29.659941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.211 [2024-07-26 13:43:29.660302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.211 [2024-07-26 13:43:29.660318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.211 [2024-07-26 13:43:29.671980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.211 [2024-07-26 13:43:29.672317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:28 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.211 [2024-07-26 13:43:29.672333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.683801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.684028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.684043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.695601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.695926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.695942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.707413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.707762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.707778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.719255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.719599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.719615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.731082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.731461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.731477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.742906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.743153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.743170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.754726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.755119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.766514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.766933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.766948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.778309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.778676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.778695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.790197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.790601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.790618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.801984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.802215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.472 [2024-07-26 13:43:29.802229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.472 [2024-07-26 13:43:29.813904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.472 [2024-07-26 13:43:29.814282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.814298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.825695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.826054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.826070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.837479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.837793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.837808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.849272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.849666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.849682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.861067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.861339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.861354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.872877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.873217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.873233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.884628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.884985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.885001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.896499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.896728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.896743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.908290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.908662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.908677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.920079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 
13:43:29.920453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.920469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.931890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.932252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.932268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.473 [2024-07-26 13:43:29.943695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.473 [2024-07-26 13:43:29.944046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.473 [2024-07-26 13:43:29.944062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.734 [2024-07-26 13:43:29.955490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.734 [2024-07-26 13:43:29.955721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.734 [2024-07-26 13:43:29.955736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.734 [2024-07-26 13:43:29.967307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.734 [2024-07-26 13:43:29.967652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.734 [2024-07-26 13:43:29.967668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.734 [2024-07-26 13:43:29.979126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.734 [2024-07-26 13:43:29.979365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.734 [2024-07-26 13:43:29.979379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.734 [2024-07-26 13:43:29.991014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.734 [2024-07-26 13:43:29.991371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.734 [2024-07-26 13:43:29.991388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.003280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 
00:32:32.735 [2024-07-26 13:43:30.003734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.003753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.015056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.015310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.015326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.026871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.027122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.027138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.038751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.039023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.039044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.050573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.050820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.050836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.062449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.062816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.062832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.074298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.074639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.074655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.086153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) 
with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.086441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.086460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.098038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.098289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.098305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.109876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.110122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.121719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.122086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.122104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.133569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.133809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.133826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.145330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.145590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.157119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.157487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.169042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.169372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.169389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.180920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.181288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.181304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.192700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.193092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.193108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.735 [2024-07-26 13:43:30.204444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.735 [2024-07-26 13:43:30.204820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.735 [2024-07-26 13:43:30.204835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.216304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.216665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.216681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.228115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.228479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.228495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.239932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.240311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.240326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.251811] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.252187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.252206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.263610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.263838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.263852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.275410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.275784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.275799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.287221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.287467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.287481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.299087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.299335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.299350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.310950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.311275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.311290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.322752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.323044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.323059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.334629] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.334956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.334973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.346466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.346835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.346850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.358280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.358718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.370087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.370445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.370461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.381910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.382156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.382173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.393764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.394108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.394128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.405592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.405975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.405990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 
[2024-07-26 13:43:30.417369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.417718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.417733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.429194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.429533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.429549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.441026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.441369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.441385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.452814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.453177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.453194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.997 [2024-07-26 13:43:30.464630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:32.997 [2024-07-26 13:43:30.464982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.997 [2024-07-26 13:43:30.464998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.476483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.476862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.476879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.488245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.488573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.488589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:32:33.259 [2024-07-26 13:43:30.500092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.500448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.500464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.511893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.512261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.512276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.523713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.524073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.524088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.535523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.535909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.535924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.547307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.259 [2024-07-26 13:43:30.547629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.259 [2024-07-26 13:43:30.547645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.259 [2024-07-26 13:43:30.559100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.559471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.559487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.570909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.571158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.571174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.582755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.583111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.583126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.594569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.594973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.594989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.606389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.606778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.618158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.618518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.618533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.630004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.630260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.630275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.641811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.642140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.642156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.653584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.653939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.653955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.665395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.665762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.665778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.677444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.677784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.677800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.689272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.689618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.689634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.701075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.701499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.701518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.712937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.713269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.713285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.260 [2024-07-26 13:43:30.724782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.260 [2024-07-26 13:43:30.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.260 [2024-07-26 13:43:30.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.521 [2024-07-26 13:43:30.736609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.521 [2024-07-26 13:43:30.736961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.521 [2024-07-26 13:43:30.736977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.521 [2024-07-26 13:43:30.748475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.521 [2024-07-26 13:43:30.748857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.521 [2024-07-26 13:43:30.748873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.521 [2024-07-26 13:43:30.760275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.521 [2024-07-26 13:43:30.760643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.521 [2024-07-26 13:43:30.760659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.521 [2024-07-26 13:43:30.772066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.522 [2024-07-26 13:43:30.772518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.522 [2024-07-26 13:43:30.772533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.522 [2024-07-26 13:43:30.783881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.522 [2024-07-26 13:43:30.784257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.522 [2024-07-26 13:43:30.784272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.522 [2024-07-26 13:43:30.795721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.522 [2024-07-26 13:43:30.796048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.522 [2024-07-26 13:43:30.796063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.522 [2024-07-26 13:43:30.807565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.522 [2024-07-26 13:43:30.807860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.522 [2024-07-26 13:43:30.807876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:33.522 [2024-07-26 13:43:30.819391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0 00:32:33.522 [2024-07-26 13:43:30.819752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:33.522 [2024-07-26 13:43:30.819768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:33.522 [2024-07-26 13:43:30.831187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2253d20) with pdu=0x2000190eaef0
00:32:33.522 [2024-07-26 13:43:30.831657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:33.522 [2024-07-26 13:43:30.831674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:33.522
00:32:33.522 Latency(us)
00:32:33.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:33.522 nvme0n1 : 2.01 21438.42 83.74 0.00 0.00 5960.12 2676.05 21954.56
00:32:33.522 ===================================================================================================================
00:32:33.522 Total : 21438.42 83.74 0.00 0.00 5960.12 2676.05 21954.56
00:32:33.522 0
00:32:33.522 13:43:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:33.522 13:43:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:33.522 13:43:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:33.522 | .driver_specific
00:32:33.522 | .nvme_error
00:32:33.522 | .status_code
00:32:33.522 | .command_transient_transport_error'
00:32:33.522 13:43:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:33.782 13:43:31 -- host/digest.sh@71 -- # (( 168 > 0 ))
00:32:33.782 13:43:31 -- host/digest.sh@73 -- # killprocess 1176329
00:32:33.782 13:43:31 -- common/autotest_common.sh@926 -- # '[' -z 1176329 ']'
00:32:33.782 13:43:31 -- common/autotest_common.sh@930 -- # kill -0 1176329
00:32:33.782 13:43:31 -- common/autotest_common.sh@931 -- # uname
00:32:33.782 13:43:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:33.782 13:43:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1176329
00:32:33.782 13:43:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:33.782 13:43:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:33.782 13:43:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1176329'
00:32:33.782 killing process with pid 1176329
00:32:33.782 13:43:31 -- common/autotest_common.sh@945 -- # kill 1176329
00:32:33.782 Received shutdown signal, test time was about 2.000000 seconds
00:32:33.782
00:32:33.782 Latency(us)
00:32:33.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.782 ===================================================================================================================
00:32:33.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:33.783 13:43:31 -- common/autotest_common.sh@950 -- # wait 1176329
00:32:33.783 13:43:31 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:32:33.783 13:43:31 -- host/digest.sh@54 -- # local rw bs qd
00:32:33.783 13:43:31 -- host/digest.sh@56 -- # rw=randwrite
00:32:33.783 13:43:31 -- host/digest.sh@56 -- # bs=131072
00:32:33.783 13:43:31 -- host/digest.sh@56 -- # qd=16
00:32:33.783 13:43:31 -- host/digest.sh@58 -- # bperfpid=1177025
00:32:33.783 13:43:31 -- host/digest.sh@60 -- # waitforlisten 1177025 /var/tmp/bperf.sock
00:32:33.783 13:43:31 -- common/autotest_common.sh@819 -- # '[' -z
1177025 ']' 00:32:33.783 13:43:31 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:33.783 13:43:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:33.783 13:43:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:33.783 13:43:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:33.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:33.783 13:43:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:33.783 13:43:31 -- common/autotest_common.sh@10 -- # set +x 00:32:33.783 [2024-07-26 13:43:31.225275] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:33.783 [2024-07-26 13:43:31.225331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177025 ] 00:32:33.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.783 Zero copy mechanism will not be used. 00:32:33.783 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.044 [2024-07-26 13:43:31.300392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.044 [2024-07-26 13:43:31.326868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.616 13:43:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:34.616 13:43:31 -- common/autotest_common.sh@852 -- # return 0 00:32:34.616 13:43:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:34.616 13:43:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:34.877 13:43:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:34.877 13:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.877 13:43:32 -- common/autotest_common.sh@10 -- # set +x 00:32:34.877 13:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.877 13:43:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.877 13:43:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.138 nvme0n1 00:32:35.138 13:43:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:35.138 13:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.138 13:43:32 -- common/autotest_common.sh@10 -- # set +x 00:32:35.138 13:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.138 13:43:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:35.138 13:43:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.138 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:35.138 Zero copy mechanism will not be used. 00:32:35.138 Running I/O for 2 seconds... 
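For reference, the error-injection pass whose 2-second run follows boils down to roughly the RPC sequence below. This is a minimal sketch reconstructed only from the commands already traced above, with paths shortened to be relative to the SPDK tree; the socket /var/tmp/bperf.sock, bdev name nvme0, target 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1 are the values used by this particular run, and backgrounding bdevperf with '&' stands in for the script's waitforlisten helper.

  # start bdevperf on its own RPC socket: 2 s of randwrite, 128 KiB I/Os, queue depth 16, wait for perform_tests
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # enable per-command NVMe error statistics and set the bdev retry count, as in the trace above
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the NVMe-oF TCP controller with data digest (DDGST) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ask the target side (the script's rpc_cmd, i.e. the default RPC socket) to corrupt crc32c results
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the workload, then read back the transient transport error count from iostat
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each corrupted data digest then shows up in the output below as a tcp.c data_crc32_calc_done error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is what the counter read back at the end of the run is counting.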
00:32:35.138 [2024-07-26 13:43:32.516491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.516733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.516759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.531396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.531912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.531932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.547696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.548031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.561780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.562123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.562139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.577167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.577364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.577380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.592171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.138 [2024-07-26 13:43:32.592610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.138 [2024-07-26 13:43:32.592626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.138 [2024-07-26 13:43:32.607841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.139 [2024-07-26 13:43:32.608190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.139 [2024-07-26 13:43:32.608213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.400 [2024-07-26 13:43:32.623146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.400 [2024-07-26 13:43:32.623627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.400 [2024-07-26 13:43:32.623644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.400 [2024-07-26 13:43:32.637987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.400 [2024-07-26 13:43:32.638299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.400 [2024-07-26 13:43:32.638318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.400 [2024-07-26 13:43:32.652332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.400 [2024-07-26 13:43:32.652589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.400 [2024-07-26 13:43:32.652604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.400 [2024-07-26 13:43:32.666491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.400 [2024-07-26 13:43:32.666937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.666957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.681812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.682031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.682046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.696784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.697074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.697089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.712036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.712445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.712462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.727355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.727602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.727619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.742206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.742680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.757782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.758299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.758315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.773594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.773941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.773956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.789819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.790319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.806347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.806699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.806716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.822663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.823061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.823077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.836277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.836550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.836566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.849840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.850017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.850032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.401 [2024-07-26 13:43:32.864852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.401 [2024-07-26 13:43:32.865206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.401 [2024-07-26 13:43:32.865223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.878995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.879218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.879233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.890622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.890931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.890946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.902397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.902579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.902594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.916080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.916271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 
[2024-07-26 13:43:32.916287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.928927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.929248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.929265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.943146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.943537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.943553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.959034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.959556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.959572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.973942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.974389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.974406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.986346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.986656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.986672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:32.999624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:32.999965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:32.999982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.014995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.015232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:33.015248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.029352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.029697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:33.029714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.044892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.045348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:33.045368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.059525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.060026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:33.060042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.074645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.074989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.663 [2024-07-26 13:43:33.075005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.663 [2024-07-26 13:43:33.089544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.663 [2024-07-26 13:43:33.090022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.664 [2024-07-26 13:43:33.090038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.664 [2024-07-26 13:43:33.105571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.664 [2024-07-26 13:43:33.105805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.664 [2024-07-26 13:43:33.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.664 [2024-07-26 13:43:33.120182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.664 [2024-07-26 13:43:33.120477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.664 [2024-07-26 13:43:33.120492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.664 [2024-07-26 13:43:33.134371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.664 [2024-07-26 13:43:33.134606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.664 [2024-07-26 13:43:33.134621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.148177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.148550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.148567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.163169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.163695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.163711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.178934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.179460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.179477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.195768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.196104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.196120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.211180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.211730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.211747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.226259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.226700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.226717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.240055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.240426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.240443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.254048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.254429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.254446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.267812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.268018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.268034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.281554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.281965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.281982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.297332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.297716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.297732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.313040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.313334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.313351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.327150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 
[2024-07-26 13:43:33.327509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.327527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.341495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.341809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.341825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.355144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.355544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.355560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.370331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.370586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.370603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.925 [2024-07-26 13:43:33.385445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:35.925 [2024-07-26 13:43:33.385950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.925 [2024-07-26 13:43:33.385966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.400550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.400888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.400904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.414478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.414822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.414838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.428701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.429099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.444170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.444630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.444647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.458905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.459236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.459252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.186 [2024-07-26 13:43:33.475059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.186 [2024-07-26 13:43:33.475538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.186 [2024-07-26 13:43:33.475554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.489284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.489572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.489588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.504029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.504300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.504317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.519673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.519979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.519994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.534586] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.534807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.534823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.547347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.547783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.547800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.561739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.562198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.562218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.576765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.577143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.577159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.592272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.592711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.592726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.608142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.608670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.623240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.623591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.623607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:36.187 [2024-07-26 13:43:33.639410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.639778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.639795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.187 [2024-07-26 13:43:33.654786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.187 [2024-07-26 13:43:33.655098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.187 [2024-07-26 13:43:33.655115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.669265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.669488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.684987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.685384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.685404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.699834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.700219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.700235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.714108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.714583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.714599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.728699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.729222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.729239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.742760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.743270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.743287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.758241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.758743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.758761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.773772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.774038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.774054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.788196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.788525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.788542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.803086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.803321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.803336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.816363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.816758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.816775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.830498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.830784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.830801] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.845638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.845847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.845863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.858896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.859507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.859524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.873119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.873559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.873575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.888767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.889229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.889245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.902208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.449 [2024-07-26 13:43:33.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.449 [2024-07-26 13:43:33.902635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.449 [2024-07-26 13:43:33.917331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.450 [2024-07-26 13:43:33.917745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.450 [2024-07-26 13:43:33.917761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.711 [2024-07-26 13:43:33.931501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.711 [2024-07-26 13:43:33.931867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.711 [2024-07-26 13:43:33.931883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.711 [2024-07-26 13:43:33.945815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.711 [2024-07-26 13:43:33.946285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:33.946301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:33.961513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:33.961831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:33.961847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:33.975841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:33.976333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:33.976349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:33.990360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:33.990738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:33.990754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.004701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.005116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.005132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.019553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.020017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.020034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.033665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:36.712 [2024-07-26 13:43:34.034063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.048799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.049191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.049211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.063209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.063461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.063480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.076887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.077182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.077199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.091521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.091830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.091846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.107161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.107409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.107425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.122002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.122186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.122205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.135272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.135471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.135486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.148395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.148684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.160603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.160853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.160870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.712 [2024-07-26 13:43:34.173005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.712 [2024-07-26 13:43:34.173244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.712 [2024-07-26 13:43:34.173260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.974 [2024-07-26 13:43:34.187674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.974 [2024-07-26 13:43:34.188004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.974 [2024-07-26 13:43:34.188020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.974 [2024-07-26 13:43:34.201209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.974 [2024-07-26 13:43:34.201418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.974 [2024-07-26 13:43:34.201434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.974 [2024-07-26 13:43:34.214917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.974 [2024-07-26 13:43:34.215322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.974 [2024-07-26 13:43:34.215338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.974 [2024-07-26 13:43:34.228404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.974 [2024-07-26 13:43:34.228653] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.974 [2024-07-26 13:43:34.228669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.241283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.241671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.241688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.255447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.255785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.255801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.268657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.269101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.269117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.284071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.284326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.284342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.298587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.298982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.298998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.313030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.313376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.313392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.328248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.328526] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.328542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.341861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.342149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.342165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.354923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.355196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.355217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.367783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.368034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.368050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.381677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.382073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.395839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.396089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.396106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.409732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.410154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.410170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.424021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 
00:32:36.975 [2024-07-26 13:43:34.424426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.424444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.975 [2024-07-26 13:43:34.438812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:36.975 [2024-07-26 13:43:34.439178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.975 [2024-07-26 13:43:34.439194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.237 [2024-07-26 13:43:34.452438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:37.237 [2024-07-26 13:43:34.452688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.237 [2024-07-26 13:43:34.452703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.237 [2024-07-26 13:43:34.467541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:37.237 [2024-07-26 13:43:34.467965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.237 [2024-07-26 13:43:34.467981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.237 [2024-07-26 13:43:34.480351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:37.237 [2024-07-26 13:43:34.480646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.237 [2024-07-26 13:43:34.480662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.237 [2024-07-26 13:43:34.493980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:37.237 [2024-07-26 13:43:34.494469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.237 [2024-07-26 13:43:34.494486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.237 [2024-07-26 13:43:34.506762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2254060) with pdu=0x2000190fef90 00:32:37.237 [2024-07-26 13:43:34.507061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.237 [2024-07-26 13:43:34.507077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.237 00:32:37.237 Latency(us) 00:32:37.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:32:37.237 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:37.237 nvme0n1 : 2.01 2133.95 266.74 0.00 0.00 7484.30 5160.96 16602.45 00:32:37.237 =================================================================================================================== 00:32:37.237 Total : 2133.95 266.74 0.00 0.00 7484.30 5160.96 16602.45 00:32:37.237 0 00:32:37.237 13:43:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:37.237 13:43:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:37.237 13:43:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:37.237 | .driver_specific 00:32:37.237 | .nvme_error 00:32:37.237 | .status_code 00:32:37.237 | .command_transient_transport_error' 00:32:37.237 13:43:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:37.237 13:43:34 -- host/digest.sh@71 -- # (( 138 > 0 )) 00:32:37.237 13:43:34 -- host/digest.sh@73 -- # killprocess 1177025 00:32:37.237 13:43:34 -- common/autotest_common.sh@926 -- # '[' -z 1177025 ']' 00:32:37.237 13:43:34 -- common/autotest_common.sh@930 -- # kill -0 1177025 00:32:37.237 13:43:34 -- common/autotest_common.sh@931 -- # uname 00:32:37.237 13:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:37.237 13:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1177025 00:32:37.498 13:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:37.498 13:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:37.498 13:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1177025' 00:32:37.498 killing process with pid 1177025 00:32:37.498 13:43:34 -- common/autotest_common.sh@945 -- # kill 1177025 00:32:37.498 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.498 00:32:37.498 Latency(us) 00:32:37.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.498 =================================================================================================================== 00:32:37.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.498 13:43:34 -- common/autotest_common.sh@950 -- # wait 1177025 00:32:37.498 13:43:34 -- host/digest.sh@115 -- # killprocess 1174593 00:32:37.498 13:43:34 -- common/autotest_common.sh@926 -- # '[' -z 1174593 ']' 00:32:37.498 13:43:34 -- common/autotest_common.sh@930 -- # kill -0 1174593 00:32:37.498 13:43:34 -- common/autotest_common.sh@931 -- # uname 00:32:37.498 13:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:37.498 13:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1174593 00:32:37.498 13:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:37.499 13:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:37.499 13:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1174593' 00:32:37.499 killing process with pid 1174593 00:32:37.499 13:43:34 -- common/autotest_common.sh@945 -- # kill 1174593 00:32:37.499 13:43:34 -- common/autotest_common.sh@950 -- # wait 1174593 00:32:37.760 00:32:37.760 real 0m15.917s 00:32:37.760 user 0m31.687s 00:32:37.760 sys 0m2.853s 00:32:37.760 13:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.761 13:43:35 -- common/autotest_common.sh@10 -- # set +x 00:32:37.761 ************************************ 00:32:37.761 END TEST nvmf_digest_error 
00:32:37.761 ************************************ 00:32:37.761 13:43:35 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:37.761 13:43:35 -- host/digest.sh@139 -- # nvmftestfini 00:32:37.761 13:43:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:37.761 13:43:35 -- nvmf/common.sh@116 -- # sync 00:32:37.761 13:43:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:37.761 13:43:35 -- nvmf/common.sh@119 -- # set +e 00:32:37.761 13:43:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:37.761 13:43:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:37.761 rmmod nvme_tcp 00:32:37.761 rmmod nvme_fabrics 00:32:37.761 rmmod nvme_keyring 00:32:37.761 13:43:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:37.761 13:43:35 -- nvmf/common.sh@123 -- # set -e 00:32:37.761 13:43:35 -- nvmf/common.sh@124 -- # return 0 00:32:37.761 13:43:35 -- nvmf/common.sh@477 -- # '[' -n 1174593 ']' 00:32:37.761 13:43:35 -- nvmf/common.sh@478 -- # killprocess 1174593 00:32:37.761 13:43:35 -- common/autotest_common.sh@926 -- # '[' -z 1174593 ']' 00:32:37.761 13:43:35 -- common/autotest_common.sh@930 -- # kill -0 1174593 00:32:37.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1174593) - No such process 00:32:37.761 13:43:35 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1174593 is not found' 00:32:37.761 Process with pid 1174593 is not found 00:32:37.761 13:43:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:37.761 13:43:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:37.761 13:43:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:37.761 13:43:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.761 13:43:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:37.761 13:43:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.761 13:43:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:37.761 13:43:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.729 13:43:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:39.729 00:32:39.729 real 0m41.169s 00:32:39.729 user 1m5.160s 00:32:39.729 sys 0m11.136s 00:32:39.729 13:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.729 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.729 ************************************ 00:32:39.729 END TEST nvmf_digest 00:32:39.729 ************************************ 00:32:39.729 13:43:37 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:32:39.729 13:43:37 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:32:39.729 13:43:37 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:32:39.729 13:43:37 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:39.729 13:43:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:39.729 13:43:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:39.729 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:32:39.990 ************************************ 00:32:39.990 START TEST nvmf_bdevperf 00:32:39.990 ************************************ 00:32:39.990 13:43:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:39.990 * Looking for test storage... 
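For reference, the pass/fail decision in the nvmf_digest_error run that finishes above comes down to reading the NVMe error counters that bdevperf exposes through the bdev_get_iostat RPC, filtering them with jq, and requiring a non-zero transient-transport-error count. A minimal sketch of that check follows; it assumes the same rpc.py path, /var/tmp/bperf.sock socket and nvme0n1 bdev name seen in the trace, and is an illustration rather than the exact get_transient_errcount helper from host/digest.sh.

#!/usr/bin/env bash
# Sketch: count transient transport errors recorded for the bdevperf NVMe bdev.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# Data-digest corruption is injected on purpose, so the test only passes when
# at least one COMMAND TRANSIENT TRANSPORT ERROR completion was counted
# (the run above saw 138 of them).
(( errcount > 0 )) || { echo "no digest errors detected" >&2; exit 1; }
echo "detected $errcount transient transport errors"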
00:32:39.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:39.990 13:43:37 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.990 13:43:37 -- nvmf/common.sh@7 -- # uname -s 00:32:39.990 13:43:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.990 13:43:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.990 13:43:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.990 13:43:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.990 13:43:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.990 13:43:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.990 13:43:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.990 13:43:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.990 13:43:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.990 13:43:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.990 13:43:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.990 13:43:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.990 13:43:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.990 13:43:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.990 13:43:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.990 13:43:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.990 13:43:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.990 13:43:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.990 13:43:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.990 13:43:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.990 13:43:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.990 13:43:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.990 13:43:37 -- paths/export.sh@5 -- # export PATH 00:32:39.990 13:43:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.990 13:43:37 -- nvmf/common.sh@46 -- # : 0 00:32:39.990 13:43:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:39.990 13:43:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:39.990 13:43:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:39.990 13:43:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.990 13:43:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.990 13:43:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:39.990 13:43:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:39.990 13:43:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:39.990 13:43:37 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.990 13:43:37 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.990 13:43:37 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:39.990 13:43:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:39.990 13:43:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.990 13:43:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:39.990 13:43:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:39.990 13:43:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:39.990 13:43:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.990 13:43:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:39.990 13:43:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.990 13:43:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:39.990 13:43:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:39.990 13:43:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:39.990 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:32:48.134 13:43:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:48.134 13:43:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:48.134 13:43:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:48.134 13:43:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:48.134 13:43:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:48.134 13:43:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:48.134 13:43:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:48.134 13:43:44 -- nvmf/common.sh@294 -- # net_devs=() 00:32:48.134 13:43:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:48.134 13:43:44 -- nvmf/common.sh@295 
-- # e810=() 00:32:48.134 13:43:44 -- nvmf/common.sh@295 -- # local -ga e810 00:32:48.134 13:43:44 -- nvmf/common.sh@296 -- # x722=() 00:32:48.134 13:43:44 -- nvmf/common.sh@296 -- # local -ga x722 00:32:48.134 13:43:44 -- nvmf/common.sh@297 -- # mlx=() 00:32:48.134 13:43:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:48.134 13:43:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.134 13:43:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:48.134 13:43:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:48.134 13:43:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:48.134 13:43:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:48.134 13:43:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:48.134 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:48.134 13:43:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:48.134 13:43:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:48.134 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:48.134 13:43:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:48.134 13:43:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:48.134 13:43:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:48.134 13:43:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.134 13:43:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:48.134 13:43:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.134 13:43:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:48.134 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:32:48.134 13:43:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.134 13:43:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:48.134 13:43:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.134 13:43:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:48.135 13:43:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.135 13:43:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:48.135 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:48.135 13:43:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.135 13:43:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:48.135 13:43:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:48.135 13:43:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:48.135 13:43:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:48.135 13:43:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:48.135 13:43:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.135 13:43:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.135 13:43:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.135 13:43:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:48.135 13:43:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.135 13:43:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.135 13:43:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:48.135 13:43:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.135 13:43:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.135 13:43:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:48.135 13:43:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:48.135 13:43:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.135 13:43:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.135 13:43:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.135 13:43:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.135 13:43:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:48.135 13:43:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.135 13:43:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.135 13:43:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.135 13:43:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:48.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:32:48.135 00:32:48.135 --- 10.0.0.2 ping statistics --- 00:32:48.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.135 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:32:48.135 13:43:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:32:48.135 00:32:48.135 --- 10.0.0.1 ping statistics --- 00:32:48.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.135 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:32:48.135 13:43:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.135 13:43:44 -- nvmf/common.sh@410 -- # return 0 00:32:48.135 13:43:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:48.135 13:43:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.135 13:43:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:48.135 13:43:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:48.135 13:43:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.135 13:43:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:48.135 13:43:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:48.135 13:43:44 -- host/bdevperf.sh@25 -- # tgt_init 00:32:48.135 13:43:44 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:48.135 13:43:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:48.135 13:43:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:48.135 13:43:44 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 13:43:44 -- nvmf/common.sh@469 -- # nvmfpid=1181741 00:32:48.135 13:43:44 -- nvmf/common.sh@470 -- # waitforlisten 1181741 00:32:48.135 13:43:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:48.135 13:43:44 -- common/autotest_common.sh@819 -- # '[' -z 1181741 ']' 00:32:48.135 13:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.135 13:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:48.135 13:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.135 13:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:48.135 13:43:44 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 [2024-07-26 13:43:44.533967] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:48.135 [2024-07-26 13:43:44.534018] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.135 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.135 [2024-07-26 13:43:44.617562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:48.135 [2024-07-26 13:43:44.653048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:48.135 [2024-07-26 13:43:44.653195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.135 [2024-07-26 13:43:44.653215] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.135 [2024-07-26 13:43:44.653222] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
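The address setup and ping checks above come from nvmf_tcp_init in test/nvmf/common.sh: one port of the dual-port NIC is moved into a private network namespace so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2) reach each other over ordinary TCP routing rather than the local loopback path. A condensed sketch of those commands, lifted from the trace, is below; the cvl_0_0/cvl_0_1 names belong to this rig's ice ports and will differ on other machines.

# Sketch of the namespace-based NVMe/TCP topology used above.
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
# The nvmf target itself is then launched inside the namespace, as above:
# ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE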
00:32:48.135 [2024-07-26 13:43:44.653338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.135 [2024-07-26 13:43:44.653615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.135 [2024-07-26 13:43:44.653616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.135 13:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:48.135 13:43:45 -- common/autotest_common.sh@852 -- # return 0 00:32:48.135 13:43:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:48.135 13:43:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 13:43:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.135 13:43:45 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.135 13:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 [2024-07-26 13:43:45.340718] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.135 13:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.135 13:43:45 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:48.135 13:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 Malloc0 00:32:48.135 13:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.135 13:43:45 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.135 13:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 13:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.135 13:43:45 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.135 13:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 13:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.135 13:43:45 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.135 13:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.135 13:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:48.135 [2024-07-26 13:43:45.418232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.135 13:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.135 13:43:45 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:48.135 13:43:45 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:48.135 13:43:45 -- nvmf/common.sh@520 -- # config=() 00:32:48.135 13:43:45 -- nvmf/common.sh@520 -- # local subsystem config 00:32:48.135 13:43:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:48.135 13:43:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:48.135 { 00:32:48.135 "params": { 00:32:48.135 "name": "Nvme$subsystem", 00:32:48.135 "trtype": "$TEST_TRANSPORT", 00:32:48.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:48.135 "adrfam": "ipv4", 00:32:48.135 "trsvcid": "$NVMF_PORT", 00:32:48.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:48.135 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:48.135 "hdgst": ${hdgst:-false}, 00:32:48.135 "ddgst": ${ddgst:-false} 00:32:48.135 }, 00:32:48.135 "method": "bdev_nvme_attach_controller" 00:32:48.135 } 00:32:48.135 EOF 00:32:48.135 )") 00:32:48.135 13:43:45 -- nvmf/common.sh@542 -- # cat 00:32:48.135 13:43:45 -- nvmf/common.sh@544 -- # jq . 00:32:48.135 13:43:45 -- nvmf/common.sh@545 -- # IFS=, 00:32:48.135 13:43:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:48.135 "params": { 00:32:48.135 "name": "Nvme1", 00:32:48.135 "trtype": "tcp", 00:32:48.135 "traddr": "10.0.0.2", 00:32:48.135 "adrfam": "ipv4", 00:32:48.135 "trsvcid": "4420", 00:32:48.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:48.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:48.135 "hdgst": false, 00:32:48.135 "ddgst": false 00:32:48.135 }, 00:32:48.135 "method": "bdev_nvme_attach_controller" 00:32:48.135 }' 00:32:48.135 [2024-07-26 13:43:45.470448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:48.135 [2024-07-26 13:43:45.470495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182096 ] 00:32:48.135 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.135 [2024-07-26 13:43:45.528654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.136 [2024-07-26 13:43:45.557528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.397 Running I/O for 1 seconds... 00:32:49.783 00:32:49.783 Latency(us) 00:32:49.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.783 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:49.783 Verification LBA range: start 0x0 length 0x4000 00:32:49.783 Nvme1n1 : 1.01 13433.85 52.48 0.00 0.00 9487.29 1460.91 21517.65 00:32:49.783 =================================================================================================================== 00:32:49.783 Total : 13433.85 52.48 0.00 0.00 9487.29 1460.91 21517.65 00:32:49.783 13:43:46 -- host/bdevperf.sh@30 -- # bdevperfpid=1182331 00:32:49.783 13:43:46 -- host/bdevperf.sh@32 -- # sleep 3 00:32:49.783 13:43:46 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:49.783 13:43:46 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:49.783 13:43:46 -- nvmf/common.sh@520 -- # config=() 00:32:49.783 13:43:46 -- nvmf/common.sh@520 -- # local subsystem config 00:32:49.783 13:43:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:49.783 13:43:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:49.783 { 00:32:49.783 "params": { 00:32:49.783 "name": "Nvme$subsystem", 00:32:49.783 "trtype": "$TEST_TRANSPORT", 00:32:49.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.783 "adrfam": "ipv4", 00:32:49.783 "trsvcid": "$NVMF_PORT", 00:32:49.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.783 "hdgst": ${hdgst:-false}, 00:32:49.783 "ddgst": ${ddgst:-false} 00:32:49.783 }, 00:32:49.783 "method": "bdev_nvme_attach_controller" 00:32:49.783 } 00:32:49.783 EOF 00:32:49.783 )") 00:32:49.783 13:43:46 -- nvmf/common.sh@542 -- # cat 00:32:49.783 13:43:46 -- nvmf/common.sh@544 -- # jq . 
00:32:49.783 13:43:46 -- nvmf/common.sh@545 -- # IFS=, 00:32:49.783 13:43:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:49.783 "params": { 00:32:49.783 "name": "Nvme1", 00:32:49.783 "trtype": "tcp", 00:32:49.783 "traddr": "10.0.0.2", 00:32:49.783 "adrfam": "ipv4", 00:32:49.783 "trsvcid": "4420", 00:32:49.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:49.783 "hdgst": false, 00:32:49.783 "ddgst": false 00:32:49.783 }, 00:32:49.783 "method": "bdev_nvme_attach_controller" 00:32:49.783 }' 00:32:49.784 [2024-07-26 13:43:46.983880] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:49.784 [2024-07-26 13:43:46.983959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182331 ] 00:32:49.784 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.784 [2024-07-26 13:43:47.049666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.784 [2024-07-26 13:43:47.077710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.784 Running I/O for 15 seconds... 00:32:53.094 13:43:49 -- host/bdevperf.sh@33 -- # kill -9 1181741 00:32:53.094 13:43:49 -- host/bdevperf.sh@35 -- # sleep 3 00:32:53.094 [2024-07-26 13:43:49.952992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.094 [2024-07-26 13:43:49.953550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.094 [2024-07-26 13:43:49.953592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.094 [2024-07-26 13:43:49.953599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.953833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 
13:43:49.953882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.953987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.953994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.095 [2024-07-26 13:43:49.954027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.095 [2024-07-26 13:43:49.954285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.095 [2024-07-26 13:43:49.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 
[2024-07-26 13:43:49.954635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.096 [2024-07-26 13:43:49.954839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.096 [2024-07-26 13:43:49.954900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.096 [2024-07-26 13:43:49.954907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.954916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.954924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.954933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.954940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.954949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.954960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.954970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.954977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.954986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.954994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73008 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:53.097 [2024-07-26 13:43:49.955157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.097 [2024-07-26 13:43:49.955293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1c50 is same with the state(5) to be set 00:32:53.097 [2024-07-26 
13:43:49.955311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:53.097 [2024-07-26 13:43:49.955317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:53.097 [2024-07-26 13:43:49.955323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72528 len:8 PRP1 0x0 PRP2 0x0 00:32:53.097 [2024-07-26 13:43:49.955331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:53.097 [2024-07-26 13:43:49.955369] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c1c50 was disconnected and freed. reset controller. 00:32:53.097 [2024-07-26 13:43:49.957793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.097 [2024-07-26 13:43:49.957841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.097 [2024-07-26 13:43:49.958712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.959426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.959462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.097 [2024-07-26 13:43:49.959473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.097 [2024-07-26 13:43:49.959680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.097 [2024-07-26 13:43:49.959810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.097 [2024-07-26 13:43:49.959818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.097 [2024-07-26 13:43:49.959827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.097 [2024-07-26 13:43:49.962041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
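The wall of NOTICE lines above is bdevperf printing every queued command that came back "ABORTED - SQ DELETION" after the NVMe-oF target process (pid 1181741, per the kill -9 in the trace above) was hard-killed about three seconds into the 15-second run; the qpair is then disconnected and freed, and each reset attempt fails because connect() to 10.0.0.2:4420 now returns errno 111 (ECONNREFUSED, nothing is listening). The fault-injection sequence traced in bdevperf.sh amounts to roughly the following; the variable names here are illustrative stand-ins, not copied from the script.

# Kill the NVMe-oF target while bdevperf is mid-workload, then give the host
# side a few seconds to hit the reconnect path shown in the log. The process
# substitution is what produces the --json /dev/fd/63 path seen in the trace.
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3              # let the verify workload start issuing I/O
kill -9 "$tgt_pid"   # hard-kill the target: in-flight/queued I/O is aborted (SQ DELETION)
sleep 3              # bdev_nvme keeps retrying the TCP connect and logs errno 111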
00:32:53.097 [2024-07-26 13:43:49.970532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.097 [2024-07-26 13:43:49.971173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.971790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.971827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.097 [2024-07-26 13:43:49.971838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.097 [2024-07-26 13:43:49.972056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.097 [2024-07-26 13:43:49.972184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.097 [2024-07-26 13:43:49.972193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.097 [2024-07-26 13:43:49.972210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.097 [2024-07-26 13:43:49.974662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.097 [2024-07-26 13:43:49.983131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.097 [2024-07-26 13:43:49.983809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.984505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.097 [2024-07-26 13:43:49.984542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.097 [2024-07-26 13:43:49.984552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.097 [2024-07-26 13:43:49.984753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.097 [2024-07-26 13:43:49.984863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.097 [2024-07-26 13:43:49.984873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:49.984882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:49.987279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.098 [2024-07-26 13:43:49.995495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:49.996424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:49.996911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:49.996923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:49.996932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:49.997081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:49.997217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:49.997226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:49.997233] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:49.999636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.098 [2024-07-26 13:43:50.007782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.008644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.009447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.009485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.009498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.009663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.009810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.009819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.009827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.012290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.098 [2024-07-26 13:43:50.020242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.020874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.021403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.021440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.021451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.021651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.021816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.021825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.021833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.024328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.098 [2024-07-26 13:43:50.032726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.033456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.033952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.033965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.033975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.034100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.034252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.034265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.034273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.036447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.098 [2024-07-26 13:43:50.045401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.045902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.046408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.046446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.046458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.046587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.046770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.046780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.046788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.049174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.098 [2024-07-26 13:43:50.057831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.058414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.059023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.059036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.059046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.059216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.059308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.059316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.059324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.061684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.098 [2024-07-26 13:43:50.070341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.070952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.071574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.098 [2024-07-26 13:43:50.071610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.098 [2024-07-26 13:43:50.071622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.098 [2024-07-26 13:43:50.071769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.098 [2024-07-26 13:43:50.071970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.098 [2024-07-26 13:43:50.071978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.098 [2024-07-26 13:43:50.071992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.098 [2024-07-26 13:43:50.074112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.098 [2024-07-26 13:43:50.082673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.098 [2024-07-26 13:43:50.083453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.084525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.084550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.084560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.084704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.084888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.084897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.084905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.087207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.099 [2024-07-26 13:43:50.095232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.095818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.096327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.096338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.096346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.096491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.096634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.096642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.096650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.098977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.099 [2024-07-26 13:43:50.107710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.108347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.108858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.108869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.108876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.109001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.109145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.109153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.109160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.111494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.099 [2024-07-26 13:43:50.120194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.120969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.121527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.121564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.121576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.121705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.121851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.121859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.121867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.124022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.099 [2024-07-26 13:43:50.132589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.133132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.133678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.133689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.133697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.133860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.134039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.134047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.134054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.136244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.099 [2024-07-26 13:43:50.144974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.145619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.146081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.146090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.146097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.146227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.146425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.146433] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.146440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.148740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.099 [2024-07-26 13:43:50.157637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.158416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.158901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.158914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.158923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.159085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.159239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.159248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.159256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.161693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.099 [2024-07-26 13:43:50.170088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.170769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.171384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.171420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.171430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.171611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.171795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.171803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.171811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.174088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.099 [2024-07-26 13:43:50.182679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.099 [2024-07-26 13:43:50.183369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.183857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.099 [2024-07-26 13:43:50.183868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.099 [2024-07-26 13:43:50.183876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.099 [2024-07-26 13:43:50.184039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.099 [2024-07-26 13:43:50.184200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.099 [2024-07-26 13:43:50.184214] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.099 [2024-07-26 13:43:50.184221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.099 [2024-07-26 13:43:50.186391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.100 [2024-07-26 13:43:50.195182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.195836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.196291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.196301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.196309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.196469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.196593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.196601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.196608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.198978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.100 [2024-07-26 13:43:50.207675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.208338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.208762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.208771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.208778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.208940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.209064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.209072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.209079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.211363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.100 [2024-07-26 13:43:50.220025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.220652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.221026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.221038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.221045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.221190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.221309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.221318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.221325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.223618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.100 [2024-07-26 13:43:50.232622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.233234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.233560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.233569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.233576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.233720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.233844] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.233852] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.233858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.236162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.100 [2024-07-26 13:43:50.245048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.245676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.246142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.246151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.246158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.246310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.246452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.246460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.246468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.248739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.100 [2024-07-26 13:43:50.257521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.258135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.258633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.258644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.258651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.258811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.258973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.258981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.258988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.261148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.100 [2024-07-26 13:43:50.270137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.270796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.271297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.271308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.271318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.271462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.271605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.271612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.271619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.273841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.100 [2024-07-26 13:43:50.282389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.283058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.283616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.283652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.283663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.283788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.283972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.283981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.283989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.100 [2024-07-26 13:43:50.286252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.100 [2024-07-26 13:43:50.294942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.100 [2024-07-26 13:43:50.295578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.296038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.100 [2024-07-26 13:43:50.296048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.100 [2024-07-26 13:43:50.296056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.100 [2024-07-26 13:43:50.296223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.100 [2024-07-26 13:43:50.296367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.100 [2024-07-26 13:43:50.296375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.100 [2024-07-26 13:43:50.296382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.298533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.101 [2024-07-26 13:43:50.307308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.307819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.308321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.308331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.308338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.308449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.308537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.308545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.308552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.310650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.101 [2024-07-26 13:43:50.319762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.320414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.320879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.320888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.320895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.321057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.321206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.321218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.321230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.323629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.101 [2024-07-26 13:43:50.332146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.332756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.333131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.333141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.333148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.333321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.333483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.333491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.333498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.335512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.101 [2024-07-26 13:43:50.344653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.345392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.345788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.345801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.345810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.345976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.346067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.346083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.346091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.348412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.101 [2024-07-26 13:43:50.357242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.357899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.358487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.358523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.358535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.358700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.358809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.358819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.358826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.361163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.101 [2024-07-26 13:43:50.369649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.370259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.370765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.370776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.370783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.370945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.371070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.371078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.371085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.373439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.101 [2024-07-26 13:43:50.382012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.382526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.383034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.383043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.383051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.383195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.383348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.383356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.383363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.385698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.101 [2024-07-26 13:43:50.394515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.395171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.395673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.395683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.395690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.395870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.395995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.396003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.101 [2024-07-26 13:43:50.396009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.101 [2024-07-26 13:43:50.398351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.101 [2024-07-26 13:43:50.406777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.101 [2024-07-26 13:43:50.407529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.407926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.101 [2024-07-26 13:43:50.407939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.101 [2024-07-26 13:43:50.407948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.101 [2024-07-26 13:43:50.408129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.101 [2024-07-26 13:43:50.408285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.101 [2024-07-26 13:43:50.408294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.408302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.410470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.102 [2024-07-26 13:43:50.419304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.419869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.420386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.420423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.420433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.420595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.420742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.420752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.420763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.423100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.102 [2024-07-26 13:43:50.431967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.432612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.433117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.433128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.433135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.433339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.433519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.433527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.433534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.435901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.102 [2024-07-26 13:43:50.444437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.445023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.445579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.445616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.445626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.445789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.445899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.445907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.445915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.448045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.102 [2024-07-26 13:43:50.456763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.457501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.457986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.457999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.458008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.458115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.458232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.458240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.458252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.460452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.102 [2024-07-26 13:43:50.469129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.469800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.470127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.470136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.470144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.470312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.470474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.470482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.470489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.472857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.102 [2024-07-26 13:43:50.481538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.482185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.482735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.482772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.482783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.482963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.483129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.483137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.483145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.485419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.102 [2024-07-26 13:43:50.494050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.494762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.495895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.495918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.495926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.496077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.496212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.496221] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.496228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.498484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.102 [2024-07-26 13:43:50.506608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.507168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.507730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.507767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.507778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.507959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.508087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.102 [2024-07-26 13:43:50.508096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.102 [2024-07-26 13:43:50.508104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.102 [2024-07-26 13:43:50.510481] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.102 [2024-07-26 13:43:50.518904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.102 [2024-07-26 13:43:50.519481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.519973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.102 [2024-07-26 13:43:50.519986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.102 [2024-07-26 13:43:50.519995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.102 [2024-07-26 13:43:50.520102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.102 [2024-07-26 13:43:50.520239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.103 [2024-07-26 13:43:50.520248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.103 [2024-07-26 13:43:50.520256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.103 [2024-07-26 13:43:50.522556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.103 [2024-07-26 13:43:50.531538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.103 [2024-07-26 13:43:50.532181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.532773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.532810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.103 [2024-07-26 13:43:50.532820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.103 [2024-07-26 13:43:50.532982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.103 [2024-07-26 13:43:50.533092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.103 [2024-07-26 13:43:50.533101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.103 [2024-07-26 13:43:50.533109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.103 [2024-07-26 13:43:50.535420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.103 [2024-07-26 13:43:50.544063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.103 [2024-07-26 13:43:50.544607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.545106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.545116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.103 [2024-07-26 13:43:50.545124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.103 [2024-07-26 13:43:50.545256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.103 [2024-07-26 13:43:50.545400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.103 [2024-07-26 13:43:50.545408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.103 [2024-07-26 13:43:50.545415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.103 [2024-07-26 13:43:50.547530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.103 [2024-07-26 13:43:50.556625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.103 [2024-07-26 13:43:50.557169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.557744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.103 [2024-07-26 13:43:50.557780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.103 [2024-07-26 13:43:50.557791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.103 [2024-07-26 13:43:50.557916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.103 [2024-07-26 13:43:50.558062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.103 [2024-07-26 13:43:50.558071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.103 [2024-07-26 13:43:50.558078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.103 [2024-07-26 13:43:50.560164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.368 [2024-07-26 13:43:50.568929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.569668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.570153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.570166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.368 [2024-07-26 13:43:50.570175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.368 [2024-07-26 13:43:50.570325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.368 [2024-07-26 13:43:50.570454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.368 [2024-07-26 13:43:50.570462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.368 [2024-07-26 13:43:50.570470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.368 [2024-07-26 13:43:50.572649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.368 [2024-07-26 13:43:50.581480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.582094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.582642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.582679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.368 [2024-07-26 13:43:50.582689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.368 [2024-07-26 13:43:50.582869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.368 [2024-07-26 13:43:50.582998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.368 [2024-07-26 13:43:50.583006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.368 [2024-07-26 13:43:50.583014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.368 [2024-07-26 13:43:50.585343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.368 [2024-07-26 13:43:50.594131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.594868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.595419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.595455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.368 [2024-07-26 13:43:50.595466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.368 [2024-07-26 13:43:50.595645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.368 [2024-07-26 13:43:50.595792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.368 [2024-07-26 13:43:50.595801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.368 [2024-07-26 13:43:50.595809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.368 [2024-07-26 13:43:50.598052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.368 [2024-07-26 13:43:50.606658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.607431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.607920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.607932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.368 [2024-07-26 13:43:50.607942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.368 [2024-07-26 13:43:50.608067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.368 [2024-07-26 13:43:50.608239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.368 [2024-07-26 13:43:50.608248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.368 [2024-07-26 13:43:50.608256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.368 [2024-07-26 13:43:50.610442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.368 [2024-07-26 13:43:50.619189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.619841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.620434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.620471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.368 [2024-07-26 13:43:50.620486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.368 [2024-07-26 13:43:50.620685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.368 [2024-07-26 13:43:50.620795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.368 [2024-07-26 13:43:50.620803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.368 [2024-07-26 13:43:50.620811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.368 [2024-07-26 13:43:50.623016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.368 [2024-07-26 13:43:50.631858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.368 [2024-07-26 13:43:50.632496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.632958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.368 [2024-07-26 13:43:50.632968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.632976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.633138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.633306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.633315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.633322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.635541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.369 [2024-07-26 13:43:50.644510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.644971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.645516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.645553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.645564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.645690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.645835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.645844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.645852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.648086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.369 [2024-07-26 13:43:50.657032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.657680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.658143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.658153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.658164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.658296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.658440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.658448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.658455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.660625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.369 [2024-07-26 13:43:50.669628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.670158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.670619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.670656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.670666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.671002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.671270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.671283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.671291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.673644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.369 [2024-07-26 13:43:50.681861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.682618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.683158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.683172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.683181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.683358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.683506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.683514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.683521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.685781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.369 [2024-07-26 13:43:50.694336] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.694937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.695488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.695525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.695536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.695722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.695868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.695877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.695884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.698226] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.369 [2024-07-26 13:43:50.706926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.707486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.707951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.707961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.707968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.708149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.708317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.708325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.708332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.710455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.369 [2024-07-26 13:43:50.719452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.720058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.720615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.720652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.720662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.720843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.721008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.721017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.721024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.723333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.369 [2024-07-26 13:43:50.731812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.732527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.733056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.733069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.369 [2024-07-26 13:43:50.733079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.369 [2024-07-26 13:43:50.733248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.369 [2024-07-26 13:43:50.733361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.369 [2024-07-26 13:43:50.733370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.369 [2024-07-26 13:43:50.733377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.369 [2024-07-26 13:43:50.735627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.369 [2024-07-26 13:43:50.744384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.369 [2024-07-26 13:43:50.744974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.369 [2024-07-26 13:43:50.745559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.745596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.745607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.745769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.745915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.745924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.745931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.748149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.370 [2024-07-26 13:43:50.756845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.757500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.757851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.757861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.757869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.758013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.758157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.758165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.758172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.760514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.370 [2024-07-26 13:43:50.769561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.770211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.770797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.770834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.770844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.771006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.771133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.771146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.771154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.773478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.370 [2024-07-26 13:43:50.782181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.782757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.783220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.783235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.783244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.783388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.783496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.783504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.783511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.785763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.370 [2024-07-26 13:43:50.794487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.795139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.795672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.795683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.795691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.795816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.795960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.795968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.795975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.798153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.370 [2024-07-26 13:43:50.806695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.807447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.807936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.807949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.807958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.808121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.808313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.808322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.808334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.810505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.370 [2024-07-26 13:43:50.819245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.819644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.820023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.820034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.820042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.820232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.820394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.820402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.820410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.822707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.370 [2024-07-26 13:43:50.831940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.370 [2024-07-26 13:43:50.832435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.832895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.370 [2024-07-26 13:43:50.832905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.370 [2024-07-26 13:43:50.832912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.370 [2024-07-26 13:43:50.833093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.370 [2024-07-26 13:43:50.833241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.370 [2024-07-26 13:43:50.833249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.370 [2024-07-26 13:43:50.833256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.370 [2024-07-26 13:43:50.835518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.633 [2024-07-26 13:43:50.844198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.633 [2024-07-26 13:43:50.844948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.845432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.845447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.633 [2024-07-26 13:43:50.845456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.633 [2024-07-26 13:43:50.845582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.633 [2024-07-26 13:43:50.845746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.633 [2024-07-26 13:43:50.845755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.633 [2024-07-26 13:43:50.845762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.633 [2024-07-26 13:43:50.848004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.633 [2024-07-26 13:43:50.856758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.633 [2024-07-26 13:43:50.857464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.857948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.857960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.633 [2024-07-26 13:43:50.857969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.633 [2024-07-26 13:43:50.858150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.633 [2024-07-26 13:43:50.858304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.633 [2024-07-26 13:43:50.858313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.633 [2024-07-26 13:43:50.858321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.633 [2024-07-26 13:43:50.860666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.633 [2024-07-26 13:43:50.869274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.633 [2024-07-26 13:43:50.870002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.870484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.870500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.633 [2024-07-26 13:43:50.870509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.633 [2024-07-26 13:43:50.870634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.633 [2024-07-26 13:43:50.870798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.633 [2024-07-26 13:43:50.870806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.633 [2024-07-26 13:43:50.870814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.633 [2024-07-26 13:43:50.873053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.633 [2024-07-26 13:43:50.881743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.633 [2024-07-26 13:43:50.882457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.882946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.633 [2024-07-26 13:43:50.882959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.633 [2024-07-26 13:43:50.882968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.633 [2024-07-26 13:43:50.883093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.883227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.883236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.883244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.885573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.634 [2024-07-26 13:43:50.894211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.894866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.895432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.895468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.895479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.895660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.895788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.895796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.895803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.897987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.634 [2024-07-26 13:43:50.906716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.907324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.907655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.907664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.907672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.907798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.907960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.907967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.907974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.910138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.634 [2024-07-26 13:43:50.918968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.919714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.920198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.920225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.920235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.920397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.920562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.920570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.920577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.922818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.634 [2024-07-26 13:43:50.931467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.932082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.932455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.932466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.932474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.932636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.932743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.932751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.932758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.935303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.634 [2024-07-26 13:43:50.943784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.944301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.944802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.944814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.944824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.945042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.945133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.945141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.945148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.947366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.634 [2024-07-26 13:43:50.956327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.957134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.957661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.957675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.957684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.957847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.957994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.958002] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.958010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.960253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.634 [2024-07-26 13:43:50.968906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.634 [2024-07-26 13:43:50.969558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.969975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.634 [2024-07-26 13:43:50.969989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.634 [2024-07-26 13:43:50.969998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.634 [2024-07-26 13:43:50.970160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.634 [2024-07-26 13:43:50.970330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.634 [2024-07-26 13:43:50.970340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.634 [2024-07-26 13:43:50.970347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.634 [2024-07-26 13:43:50.972616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.635 [2024-07-26 13:43:50.981390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:50.981872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:50.982330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:50.982340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:50.982348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:50.982491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:50.982615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:50.982622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:50.982630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:50.984833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.635 [2024-07-26 13:43:50.993800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:50.994547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:50.995028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:50.995040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:50.995050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:50.995156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:50.995297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:50.995307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:50.995315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:50.997645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.635 [2024-07-26 13:43:51.006338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.007083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.007572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.007588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.007601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.007819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.007947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.007956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.007963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.010259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.635 [2024-07-26 13:43:51.018821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.019489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.019984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.019994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.020002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.020145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.020311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.020320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.020327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.022629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.635 [2024-07-26 13:43:51.031471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.032079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.032617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.032627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.032634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.032776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.032919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.032927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.032935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.035165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.635 [2024-07-26 13:43:51.043860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.044526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.044931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.044941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.044948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.045080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.045226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.045234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.045242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.047500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.635 [2024-07-26 13:43:51.056385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.057025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.057575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.057612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.057624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.057771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.057919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.057927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.057935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.060321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.635 [2024-07-26 13:43:51.068890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.069586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.070072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.070085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.070094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.070205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.635 [2024-07-26 13:43:51.070315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.635 [2024-07-26 13:43:51.070323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.635 [2024-07-26 13:43:51.070331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.635 [2024-07-26 13:43:51.072515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.635 [2024-07-26 13:43:51.081395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.635 [2024-07-26 13:43:51.082072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.082385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.635 [2024-07-26 13:43:51.082401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.635 [2024-07-26 13:43:51.082410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.635 [2024-07-26 13:43:51.082517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.636 [2024-07-26 13:43:51.082631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.636 [2024-07-26 13:43:51.082639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.636 [2024-07-26 13:43:51.082646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.636 [2024-07-26 13:43:51.084925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.636 [2024-07-26 13:43:51.094058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.636 [2024-07-26 13:43:51.094786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.636 [2024-07-26 13:43:51.095270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.636 [2024-07-26 13:43:51.095284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.636 [2024-07-26 13:43:51.095293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.636 [2024-07-26 13:43:51.095510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.636 [2024-07-26 13:43:51.095694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.636 [2024-07-26 13:43:51.095703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.636 [2024-07-26 13:43:51.095710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.636 [2024-07-26 13:43:51.097911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.898 [2024-07-26 13:43:51.106462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.898 [2024-07-26 13:43:51.107115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-07-26 13:43:51.107565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-07-26 13:43:51.107576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.898 [2024-07-26 13:43:51.107583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.898 [2024-07-26 13:43:51.107690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.898 [2024-07-26 13:43:51.107870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.898 [2024-07-26 13:43:51.107878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.898 [2024-07-26 13:43:51.107885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.898 [2024-07-26 13:43:51.110015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.898 [2024-07-26 13:43:51.119262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.898 [2024-07-26 13:43:51.119876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-07-26 13:43:51.120333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.898 [2024-07-26 13:43:51.120344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.898 [2024-07-26 13:43:51.120351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.120494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.120637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.120649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.120656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.122979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.899 [2024-07-26 13:43:51.131814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.132464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.132925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.132935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.132942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.133067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.133192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.133205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.133218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.135615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.899 [2024-07-26 13:43:51.144338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.145075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.145575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.145590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.145599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.145725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.145834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.145842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.145849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.148108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.899 [2024-07-26 13:43:51.156873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.157518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.157979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.157989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.157996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.158140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.158305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.158314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.158324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.160628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.899 [2024-07-26 13:43:51.169418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.170152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.170677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.170691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.170700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.170843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.171027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.171035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.171043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.173577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.899 [2024-07-26 13:43:51.181951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.182633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.183119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.183132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.183141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.183298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.183427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.183435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.183442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.185791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.899 [2024-07-26 13:43:51.194209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.194893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.195486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.195523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.195533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.195713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.195934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.195943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.195951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.198179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.899 [2024-07-26 13:43:51.206797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.207448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.207937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.207950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.207959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.208158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.208334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.208345] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.208353] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.210733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.899 [2024-07-26 13:43:51.219225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.899 [2024-07-26 13:43:51.219802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.220155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.899 [2024-07-26 13:43:51.220168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.899 [2024-07-26 13:43:51.220177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.899 [2024-07-26 13:43:51.220327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.899 [2024-07-26 13:43:51.220510] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.899 [2024-07-26 13:43:51.220519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.899 [2024-07-26 13:43:51.220526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.899 [2024-07-26 13:43:51.222771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.900 [2024-07-26 13:43:51.231639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.232429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.232912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.232925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.232934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.233079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.233249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.233258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.233265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.235369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.900 [2024-07-26 13:43:51.244205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.244931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.245504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.245540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.245551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.245694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.245859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.245868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.245875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.248208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.900 [2024-07-26 13:43:51.256759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.257459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.257945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.257958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.257967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.258148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.258325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.258335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.258342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.260562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.900 [2024-07-26 13:43:51.269366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.270094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.270582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.270596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.270605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.270767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.270932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.270940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.270948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.273261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.900 [2024-07-26 13:43:51.281903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.282602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.283089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.283102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.283111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.283269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.283379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.283387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.283395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.285597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.900 [2024-07-26 13:43:51.294225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.294932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.295518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.295554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.295564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.295708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.295891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.295900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.295908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.298261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.900 [2024-07-26 13:43:51.306632] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.307419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.307813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.307825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.307835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.307979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.308144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.308152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.308160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.310427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.900 [2024-07-26 13:43:51.319254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.319984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.320553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.320572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.320581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.900 [2024-07-26 13:43:51.320744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.900 [2024-07-26 13:43:51.320890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.900 [2024-07-26 13:43:51.320898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.900 [2024-07-26 13:43:51.320906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.900 [2024-07-26 13:43:51.323164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.900 [2024-07-26 13:43:51.331658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.900 [2024-07-26 13:43:51.332421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.332902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.900 [2024-07-26 13:43:51.332915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.900 [2024-07-26 13:43:51.332924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.901 [2024-07-26 13:43:51.333105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.901 [2024-07-26 13:43:51.333264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.901 [2024-07-26 13:43:51.333275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.901 [2024-07-26 13:43:51.333282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.901 [2024-07-26 13:43:51.335593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.901 [2024-07-26 13:43:51.344196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.901 [2024-07-26 13:43:51.344930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-07-26 13:43:51.345416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-07-26 13:43:51.345430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.901 [2024-07-26 13:43:51.345439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.901 [2024-07-26 13:43:51.345527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.901 [2024-07-26 13:43:51.345655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.901 [2024-07-26 13:43:51.345663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.901 [2024-07-26 13:43:51.345670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.901 [2024-07-26 13:43:51.347836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.901 [2024-07-26 13:43:51.356807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.901 [2024-07-26 13:43:51.357309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-07-26 13:43:51.357801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.901 [2024-07-26 13:43:51.357813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:53.901 [2024-07-26 13:43:51.357826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:53.901 [2024-07-26 13:43:51.357970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:53.901 [2024-07-26 13:43:51.358079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.901 [2024-07-26 13:43:51.358087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.901 [2024-07-26 13:43:51.358094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.901 [2024-07-26 13:43:51.360428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.901 [2024-07-26 13:43:51.369282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.901 [2024-07-26 13:43:51.369911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.163 [2024-07-26 13:43:51.370482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.163 [2024-07-26 13:43:51.370498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.163 [2024-07-26 13:43:51.370507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.163 [2024-07-26 13:43:51.370670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.163 [2024-07-26 13:43:51.370779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.163 [2024-07-26 13:43:51.370787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.163 [2024-07-26 13:43:51.370795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.163 [2024-07-26 13:43:51.372891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.164 [2024-07-26 13:43:51.381834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.382640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.383125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.383138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.383147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.383359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.383527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.383540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.383548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.385564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.164 [2024-07-26 13:43:51.394107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.394809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.395443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.395479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.395489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.395619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.395784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.395792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.395800] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.397973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.164 [2024-07-26 13:43:51.406541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.407212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.407697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.407710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.407719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.407862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.408027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.408035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.408043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.410256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.164 [2024-07-26 13:43:51.418938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.419672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.420158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.420171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.420180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.420353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.420483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.420491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.420498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.422699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.164 [2024-07-26 13:43:51.431522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.432182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.432659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.432695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.432706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.432886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.433018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.433027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.433035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.435276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.164 [2024-07-26 13:43:51.444041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.444744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.445230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.445245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.445254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.445397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.445544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.445553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.445560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.448146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.164 [2024-07-26 13:43:51.456485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.164 [2024-07-26 13:43:51.457184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.457697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.164 [2024-07-26 13:43:51.457710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.164 [2024-07-26 13:43:51.457719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.164 [2024-07-26 13:43:51.457844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.164 [2024-07-26 13:43:51.457935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.164 [2024-07-26 13:43:51.457943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.164 [2024-07-26 13:43:51.457951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.164 [2024-07-26 13:43:51.460162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.164 [2024-07-26 13:43:51.469066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.469842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.470334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.470349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.470358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.470557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.470722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.470734] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.470742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.472849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.165 [2024-07-26 13:43:51.481561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.482213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.482697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.482707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.482714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.482857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.483056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.483064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.483070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.485364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.165 [2024-07-26 13:43:51.493887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.494333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.494824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.494836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.494843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.494991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.495117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.495124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.495131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.497458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.165 [2024-07-26 13:43:51.506158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.506790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.507127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.507138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.507146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.507298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.507461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.507469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.507479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.509721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.165 [2024-07-26 13:43:51.518535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.519263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.519757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.519770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.519779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.519959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.520068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.520077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.520084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.522518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.165 [2024-07-26 13:43:51.530917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.531617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.532103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.532115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.532125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.532280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.532373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.532381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.532389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.534497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.165 [2024-07-26 13:43:51.543398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.544109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.544627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.544642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.544651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.544832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.544960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.544969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.544976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.547294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.165 [2024-07-26 13:43:51.556185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.556828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.557424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.557461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.557471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.557616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.557762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.557771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.557778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.560001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.165 [2024-07-26 13:43:51.568709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.165 [2024-07-26 13:43:51.569434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.569918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.165 [2024-07-26 13:43:51.569931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.165 [2024-07-26 13:43:51.569940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.165 [2024-07-26 13:43:51.570103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.165 [2024-07-26 13:43:51.570255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.165 [2024-07-26 13:43:51.570264] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.165 [2024-07-26 13:43:51.570271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.165 [2024-07-26 13:43:51.572469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.166 [2024-07-26 13:43:51.581204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.166 [2024-07-26 13:43:51.581907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.582386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.582400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.166 [2024-07-26 13:43:51.582410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.166 [2024-07-26 13:43:51.582590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.166 [2024-07-26 13:43:51.582755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.166 [2024-07-26 13:43:51.582763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.166 [2024-07-26 13:43:51.582770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.166 [2024-07-26 13:43:51.585231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.166 [2024-07-26 13:43:51.593722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.166 [2024-07-26 13:43:51.594439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.594925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.594939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.166 [2024-07-26 13:43:51.594948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.166 [2024-07-26 13:43:51.595110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.166 [2024-07-26 13:43:51.595268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.166 [2024-07-26 13:43:51.595278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.166 [2024-07-26 13:43:51.595285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.166 [2024-07-26 13:43:51.597541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.166 [2024-07-26 13:43:51.606130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.166 [2024-07-26 13:43:51.606782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.607276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.607294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.166 [2024-07-26 13:43:51.607309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.166 [2024-07-26 13:43:51.607452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.166 [2024-07-26 13:43:51.607562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.166 [2024-07-26 13:43:51.607570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.166 [2024-07-26 13:43:51.607577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.166 [2024-07-26 13:43:51.609818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.166 [2024-07-26 13:43:51.618753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.166 [2024-07-26 13:43:51.619455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.619941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.619953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.166 [2024-07-26 13:43:51.619962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.166 [2024-07-26 13:43:51.620088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.166 [2024-07-26 13:43:51.620246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.166 [2024-07-26 13:43:51.620257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.166 [2024-07-26 13:43:51.620264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.166 [2024-07-26 13:43:51.622608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.166 [2024-07-26 13:43:51.631178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.166 [2024-07-26 13:43:51.631866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.632354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.166 [2024-07-26 13:43:51.632369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.166 [2024-07-26 13:43:51.632379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.166 [2024-07-26 13:43:51.632541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.166 [2024-07-26 13:43:51.632706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.166 [2024-07-26 13:43:51.632715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.166 [2024-07-26 13:43:51.632722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.166 [2024-07-26 13:43:51.635091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.429 [2024-07-26 13:43:51.643477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.429 [2024-07-26 13:43:51.644059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.429 [2024-07-26 13:43:51.644604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.429 [2024-07-26 13:43:51.644641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.429 [2024-07-26 13:43:51.644652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.429 [2024-07-26 13:43:51.644833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.429 [2024-07-26 13:43:51.644942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.644951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.644958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.647197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.430 [2024-07-26 13:43:51.656062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.656636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.657113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.657126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.657135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.657270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.657419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.657427] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.657434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.659439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.430 [2024-07-26 13:43:51.668541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.669243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.669729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.669746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.669756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.669955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.670101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.670109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.670116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.672395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.430 [2024-07-26 13:43:51.681075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.681792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.682455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.682492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.682502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.682646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.682792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.682801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.682809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.685158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.430 [2024-07-26 13:43:51.693652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.694398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.694882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.694895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.694904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.695048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.695194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.695235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.695243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.697425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.430 [2024-07-26 13:43:51.706160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.706848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.707333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.707347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.707360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.707504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.707669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.707678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.707685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.709903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.430 [2024-07-26 13:43:51.718783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.719398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.719877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.719887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.719895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.720020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.720127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.720136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.720144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.722434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.430 [2024-07-26 13:43:51.731047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.731449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.731953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.731962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.731969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.732131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.732316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.732324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.732331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.734616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.430 [2024-07-26 13:43:51.743693] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.744340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.744821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.744831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.744838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.745003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.430 [2024-07-26 13:43:51.745165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.430 [2024-07-26 13:43:51.745173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.430 [2024-07-26 13:43:51.745180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.430 [2024-07-26 13:43:51.747351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.430 [2024-07-26 13:43:51.756530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.430 [2024-07-26 13:43:51.757141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.757394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.430 [2024-07-26 13:43:51.757405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.430 [2024-07-26 13:43:51.757412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.430 [2024-07-26 13:43:51.757555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.757698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.757705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.757713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.760099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.431 [2024-07-26 13:43:51.768915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.769527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.769988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.769997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.770004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.770166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.770313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.770321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.770329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.772535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.431 [2024-07-26 13:43:51.781347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.781801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.782277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.782287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.782294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.782401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.782585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.782593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.782600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.784896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.431 [2024-07-26 13:43:51.793884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.794472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.794932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.794941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.794948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.795073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.795226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.795237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.795244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.797418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.431 [2024-07-26 13:43:51.806508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.807084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.807423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.807434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.807442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.807621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.807782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.807791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.807797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.810082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.431 [2024-07-26 13:43:51.818965] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.819610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.820086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.820096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.820103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.820306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.820449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.820461] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.820467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.822680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.431 [2024-07-26 13:43:51.831594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.832139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.832622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.832632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.832639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.832783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.832962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.832970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.832977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.835273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.431 [2024-07-26 13:43:51.844340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.844953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.845397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.845434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.845444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.845570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.845717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.845726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.845733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.848059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.431 [2024-07-26 13:43:51.856835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.857520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.858007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.858020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.431 [2024-07-26 13:43:51.858029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.431 [2024-07-26 13:43:51.858155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.431 [2024-07-26 13:43:51.858272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.431 [2024-07-26 13:43:51.858281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.431 [2024-07-26 13:43:51.858293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.431 [2024-07-26 13:43:51.860634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.431 [2024-07-26 13:43:51.869335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.431 [2024-07-26 13:43:51.869950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.870396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.431 [2024-07-26 13:43:51.870433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.432 [2024-07-26 13:43:51.870443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.432 [2024-07-26 13:43:51.870569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.432 [2024-07-26 13:43:51.870734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.432 [2024-07-26 13:43:51.870743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.432 [2024-07-26 13:43:51.870751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.432 [2024-07-26 13:43:51.873137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.432 [2024-07-26 13:43:51.881726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.432 [2024-07-26 13:43:51.882433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.432 [2024-07-26 13:43:51.882837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.432 [2024-07-26 13:43:51.882852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.432 [2024-07-26 13:43:51.882861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.432 [2024-07-26 13:43:51.883043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.432 [2024-07-26 13:43:51.883190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.432 [2024-07-26 13:43:51.883198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.432 [2024-07-26 13:43:51.883213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.432 [2024-07-26 13:43:51.885558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.432 [2024-07-26 13:43:51.894246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.432 [2024-07-26 13:43:51.894815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.432 [2024-07-26 13:43:51.895253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.432 [2024-07-26 13:43:51.895264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.432 [2024-07-26 13:43:51.895272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.432 [2024-07-26 13:43:51.895397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.432 [2024-07-26 13:43:51.895560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.432 [2024-07-26 13:43:51.895569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.432 [2024-07-26 13:43:51.895576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.432 [2024-07-26 13:43:51.897807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.695 [2024-07-26 13:43:51.906824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.695 [2024-07-26 13:43:51.907431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.695 [2024-07-26 13:43:51.907861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.695 [2024-07-26 13:43:51.907871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.695 [2024-07-26 13:43:51.907878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.695 [2024-07-26 13:43:51.908004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.695 [2024-07-26 13:43:51.908147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.695 [2024-07-26 13:43:51.908155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.695 [2024-07-26 13:43:51.908163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.910336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.696 [2024-07-26 13:43:51.919331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.919976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.920454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.920491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.920501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.920646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.920828] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.920837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.920845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.923197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.696 [2024-07-26 13:43:51.931794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.932451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.932952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.932962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.932970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.933151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.933318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.933326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.933333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.935623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.696 [2024-07-26 13:43:51.944186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.944844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.945439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.945476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.945487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.945649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.945796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.945805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.945812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.948147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.696 [2024-07-26 13:43:51.956666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.957449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.957981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.957995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.958005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.958167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.958341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.958351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.958359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.960664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.696 [2024-07-26 13:43:51.969254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.969785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.970255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.970266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.970274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.970399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.970487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.970495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.970502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.972597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.696 [2024-07-26 13:43:51.981873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.982369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.982838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.982847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.982854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.982980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.983104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.983112] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.983118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.985492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.696 [2024-07-26 13:43:51.994426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:51.995040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.995606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:51.995643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:51.995654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:51.995852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:51.995981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:51.995990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:51.995998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:51.998273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.696 [2024-07-26 13:43:52.007085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.696 [2024-07-26 13:43:52.007751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:52.008390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.696 [2024-07-26 13:43:52.008427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.696 [2024-07-26 13:43:52.008438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.696 [2024-07-26 13:43:52.008600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.696 [2024-07-26 13:43:52.008710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.696 [2024-07-26 13:43:52.008719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.696 [2024-07-26 13:43:52.008726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.696 [2024-07-26 13:43:52.010930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.697 [2024-07-26 13:43:52.019417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.020035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.020598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.020639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.020650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.020812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.020959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.020968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.020976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.023186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.697 [2024-07-26 13:43:52.031736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.032482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.032956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.032967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.032975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.033123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.033292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.033301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.033308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.035713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.697 [2024-07-26 13:43:52.044297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.044959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.045511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.045548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.045558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.045740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.045886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.045895] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.045903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.048290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.697 [2024-07-26 13:43:52.056707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.057439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.057932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.057944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.057958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.058084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.058256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.058266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.058273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.060504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.697 [2024-07-26 13:43:52.069150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.069772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.070238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.070258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.070266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.070396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.070558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.070567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.070574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.072750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.697 [2024-07-26 13:43:52.081907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.082515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.082992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.083002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.083010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.083172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.083338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.083346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.083353] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.085632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.697 [2024-07-26 13:43:52.094539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.095183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.095738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.095775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.095785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.095971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.096117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.096126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.096133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.098479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.697 [2024-07-26 13:43:52.107075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.107746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.108221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.108237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.108245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.108427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.697 [2024-07-26 13:43:52.108571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.697 [2024-07-26 13:43:52.108579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.697 [2024-07-26 13:43:52.108586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.697 [2024-07-26 13:43:52.110729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.697 [2024-07-26 13:43:52.119769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.697 [2024-07-26 13:43:52.120381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.120844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.697 [2024-07-26 13:43:52.120854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.697 [2024-07-26 13:43:52.120861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.697 [2024-07-26 13:43:52.121040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.698 [2024-07-26 13:43:52.121183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.698 [2024-07-26 13:43:52.121191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.698 [2024-07-26 13:43:52.121197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.698 [2024-07-26 13:43:52.123399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.698 [2024-07-26 13:43:52.132398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.698 [2024-07-26 13:43:52.133009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.133555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.133592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.698 [2024-07-26 13:43:52.133603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.698 [2024-07-26 13:43:52.133729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.698 [2024-07-26 13:43:52.133880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.698 [2024-07-26 13:43:52.133889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.698 [2024-07-26 13:43:52.133897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.698 [2024-07-26 13:43:52.136228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.698 [2024-07-26 13:43:52.145223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.698 [2024-07-26 13:43:52.145811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.146406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.146443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.698 [2024-07-26 13:43:52.146453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.698 [2024-07-26 13:43:52.146634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.698 [2024-07-26 13:43:52.146761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.698 [2024-07-26 13:43:52.146770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.698 [2024-07-26 13:43:52.146777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.698 [2024-07-26 13:43:52.149086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.698 [2024-07-26 13:43:52.157746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.698 [2024-07-26 13:43:52.158435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.158895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.698 [2024-07-26 13:43:52.158905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.698 [2024-07-26 13:43:52.158913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.698 [2024-07-26 13:43:52.159112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.698 [2024-07-26 13:43:52.159252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.698 [2024-07-26 13:43:52.159270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.698 [2024-07-26 13:43:52.159277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.698 [2024-07-26 13:43:52.161803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.961 [2024-07-26 13:43:52.170181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.961 [2024-07-26 13:43:52.170647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.961 [2024-07-26 13:43:52.171108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.961 [2024-07-26 13:43:52.171118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.961 [2024-07-26 13:43:52.171125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.961 [2024-07-26 13:43:52.171280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.961 [2024-07-26 13:43:52.171425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.961 [2024-07-26 13:43:52.171437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.961 [2024-07-26 13:43:52.171445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.961 [2024-07-26 13:43:52.173828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.961 [2024-07-26 13:43:52.182757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.961 [2024-07-26 13:43:52.183496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.961 [2024-07-26 13:43:52.183979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.961 [2024-07-26 13:43:52.183992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.961 [2024-07-26 13:43:52.184001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.961 [2024-07-26 13:43:52.184163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.961 [2024-07-26 13:43:52.184317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.961 [2024-07-26 13:43:52.184326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.961 [2024-07-26 13:43:52.184334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.961 [2024-07-26 13:43:52.186702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.961 [2024-07-26 13:43:52.195123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.961 [2024-07-26 13:43:52.195751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.196416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.196453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.196464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.196590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.196718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.196726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.196733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.199054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.962 [2024-07-26 13:43:52.207451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.208101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.208347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.208363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.208371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.208537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.208662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.208670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.208682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.210974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.962 [2024-07-26 13:43:52.220060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.220685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.221151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.221161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.221168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.221355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.221480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.221488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.221494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.223845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.962 [2024-07-26 13:43:52.232548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.233190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.233574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.233584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.233591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.233772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.233896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.233904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.233911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.236340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.962 [2024-07-26 13:43:52.245132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.245791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.246407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.246444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.246455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.246636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.246820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.246829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.246837] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.249145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.962 [2024-07-26 13:43:52.257605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.258161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.258667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.258678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.258685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.258810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.258953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.258962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.258969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.261271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.962 [2024-07-26 13:43:52.270000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.270618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.271113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.271123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.271130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.271289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.271397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.271405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.271412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.273735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.962 [2024-07-26 13:43:52.282365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.283017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.283604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.283641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.283652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.283817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.283926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.283935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.283942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.286339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.962 [2024-07-26 13:43:52.295000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.962 [2024-07-26 13:43:52.295621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.296099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.962 [2024-07-26 13:43:52.296109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.962 [2024-07-26 13:43:52.296117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.962 [2024-07-26 13:43:52.296340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.962 [2024-07-26 13:43:52.296502] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.962 [2024-07-26 13:43:52.296510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.962 [2024-07-26 13:43:52.296517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.962 [2024-07-26 13:43:52.298821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.962 [2024-07-26 13:43:52.307357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.307965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.308510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.308546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.308556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.308681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.308790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.308799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.308807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.311111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.963 [2024-07-26 13:43:52.319953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.320607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.321069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.321079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.321087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.321235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.321379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.321387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.321394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.323562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.963 [2024-07-26 13:43:52.332575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.333387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.333829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.333842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.333851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.333995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.334159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.334168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.334175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.336340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.963 [2024-07-26 13:43:52.345156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.345775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.346232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.346243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.346250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.346431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.346593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.346601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.346609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.348959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.963 [2024-07-26 13:43:52.357830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.358484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.358944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.358954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.358962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.359104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.359297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.359307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.359314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.361472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.963 [2024-07-26 13:43:52.370372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.371040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.371581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.371622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.371633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.371795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.371941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.371950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.371958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.374111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.963 [2024-07-26 13:43:52.382839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.383522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.384072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.384084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.384094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.384225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.384335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.384344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.384352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.386536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.963 [2024-07-26 13:43:52.395317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.395936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.396496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.396532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.396542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.396705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.396814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.396823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.396831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.399132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.963 [2024-07-26 13:43:52.408082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.963 [2024-07-26 13:43:52.408763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.409238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.963 [2024-07-26 13:43:52.409258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.963 [2024-07-26 13:43:52.409270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.963 [2024-07-26 13:43:52.409399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.963 [2024-07-26 13:43:52.409598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.963 [2024-07-26 13:43:52.409606] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.963 [2024-07-26 13:43:52.409613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.963 [2024-07-26 13:43:52.411716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.964 [2024-07-26 13:43:52.420566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.964 [2024-07-26 13:43:52.421179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.964 [2024-07-26 13:43:52.421645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.964 [2024-07-26 13:43:52.421656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:54.964 [2024-07-26 13:43:52.421663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:54.964 [2024-07-26 13:43:52.421807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:54.964 [2024-07-26 13:43:52.421932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.964 [2024-07-26 13:43:52.421940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.964 [2024-07-26 13:43:52.421947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.964 [2024-07-26 13:43:52.424248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.226 [2024-07-26 13:43:52.433232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.226 [2024-07-26 13:43:52.433902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.434450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.434487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.226 [2024-07-26 13:43:52.434498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.226 [2024-07-26 13:43:52.434641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.226 [2024-07-26 13:43:52.434788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.226 [2024-07-26 13:43:52.434797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.226 [2024-07-26 13:43:52.434804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.226 [2024-07-26 13:43:52.437225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.226 [2024-07-26 13:43:52.445774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.226 [2024-07-26 13:43:52.446524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.446872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.446884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.226 [2024-07-26 13:43:52.446894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.226 [2024-07-26 13:43:52.447061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.226 [2024-07-26 13:43:52.447220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.226 [2024-07-26 13:43:52.447234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.226 [2024-07-26 13:43:52.447244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.226 [2024-07-26 13:43:52.449501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.226 [2024-07-26 13:43:52.458261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.226 [2024-07-26 13:43:52.458918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.459289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.459300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.226 [2024-07-26 13:43:52.459308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.226 [2024-07-26 13:43:52.459470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.226 [2024-07-26 13:43:52.459614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.226 [2024-07-26 13:43:52.459622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.226 [2024-07-26 13:43:52.459629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.226 [2024-07-26 13:43:52.461917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.226 [2024-07-26 13:43:52.470815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.226 [2024-07-26 13:43:52.471477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.471846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.471855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.226 [2024-07-26 13:43:52.471862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.226 [2024-07-26 13:43:52.472005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.226 [2024-07-26 13:43:52.472148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.226 [2024-07-26 13:43:52.472156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.226 [2024-07-26 13:43:52.472163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.226 [2024-07-26 13:43:52.474490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.226 [2024-07-26 13:43:52.483337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.226 [2024-07-26 13:43:52.483951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.484509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.226 [2024-07-26 13:43:52.484545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.226 [2024-07-26 13:43:52.484556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.226 [2024-07-26 13:43:52.484718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.226 [2024-07-26 13:43:52.484869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.226 [2024-07-26 13:43:52.484878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.226 [2024-07-26 13:43:52.484886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.226 [2024-07-26 13:43:52.487398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.226 [2024-07-26 13:43:52.496031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.496689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.497150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.497159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.497167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.497333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.497495] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.497504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.497510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.499877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.227 [2024-07-26 13:43:52.508530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.509044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.509629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.509665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.509676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.509838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.509984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.509994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.510001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.512330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.227 [2024-07-26 13:43:52.521152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.521763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.522236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.522256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.522264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.522449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.522592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.522604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.522611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.525002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.227 [2024-07-26 13:43:52.533581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.534240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.534594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.534603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.534611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.534758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.534919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.534927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.534934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.537026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.227 [2024-07-26 13:43:52.546116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.546834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.547221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.547238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.547247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.547428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.547574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.547582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.547589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.549810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.227 [2024-07-26 13:43:52.558628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.559277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.559777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.559786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.559794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.559882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.560062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.560070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.560081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.562351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.227 [2024-07-26 13:43:52.570921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.571484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.571976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.571989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.571998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.572180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.572302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.572312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.572320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.574613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.227 [2024-07-26 13:43:52.583499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.584251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.584730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.584743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.584752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.584932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.585041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.585050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.585057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.587435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.227 [2024-07-26 13:43:52.595871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.227 [2024-07-26 13:43:52.596587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.597073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.227 [2024-07-26 13:43:52.597086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.227 [2024-07-26 13:43:52.597095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.227 [2024-07-26 13:43:52.597270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.227 [2024-07-26 13:43:52.597400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.227 [2024-07-26 13:43:52.597408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.227 [2024-07-26 13:43:52.597416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.227 [2024-07-26 13:43:52.599699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.228 [2024-07-26 13:43:52.608385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.609063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.609577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.609591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.609600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.609762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.609927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.609935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.609942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.612196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.228 [2024-07-26 13:43:52.621080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.621803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.622285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.622299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.622308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.622453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.622616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.622625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.622632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.624816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.228 [2024-07-26 13:43:52.633549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.634232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.634696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.634709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.634718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.634899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.635008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.635017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.635024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.637325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.228 [2024-07-26 13:43:52.646040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.646648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.647109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.647119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.647127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.647311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.647436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.647445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.647452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.649587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.228 [2024-07-26 13:43:52.658422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.659146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.659635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.659649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.659658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.659839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.660003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.660011] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.660019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.662227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.228 [2024-07-26 13:43:52.670823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.671607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.672039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.672050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.672058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.672242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.672387] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.672395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.672402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.674541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.228 [2024-07-26 13:43:52.683168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.683886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.684372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.684387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.684396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.684577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.684705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.684713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.684721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.228 [2024-07-26 13:43:52.687127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.228 [2024-07-26 13:43:52.695486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.228 [2024-07-26 13:43:52.696137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.696681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.228 [2024-07-26 13:43:52.696692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.228 [2024-07-26 13:43:52.696700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.228 [2024-07-26 13:43:52.696788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.228 [2024-07-26 13:43:52.696969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.228 [2024-07-26 13:43:52.696977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.228 [2024-07-26 13:43:52.696984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.699267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.491 [2024-07-26 13:43:52.707878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.708546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.709032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.709044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.709054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.709265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.709394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.709403] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.709410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.711612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.491 [2024-07-26 13:43:52.720345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.721076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.721593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.721611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.721621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.721801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.721911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.721919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.721926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.724207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.491 [2024-07-26 13:43:52.732810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.733423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.733783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.733793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.733800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.733944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.734106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.734114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.734121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.736462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.491 [2024-07-26 13:43:52.745162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.745777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.746242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.746252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.746259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.746385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.746546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.746554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.746560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.748974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.491 [2024-07-26 13:43:52.757768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.758510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.758995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.759008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.759021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.759211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.759345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.759354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.759361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.761525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.491 [2024-07-26 13:43:52.769914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.770663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.771151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.771163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.771173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.771379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.771507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.771516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.771523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.773782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.491 [2024-07-26 13:43:52.782424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.782922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.783505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.491 [2024-07-26 13:43:52.783548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.491 [2024-07-26 13:43:52.783561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.491 [2024-07-26 13:43:52.783743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.491 [2024-07-26 13:43:52.783890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.491 [2024-07-26 13:43:52.783898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.491 [2024-07-26 13:43:52.783905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.491 [2024-07-26 13:43:52.786098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.491 [2024-07-26 13:43:52.795085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.491 [2024-07-26 13:43:52.795773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.796388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.796425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.796435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.796624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.796752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.796761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.796769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.799139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.492 [2024-07-26 13:43:52.807721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.808449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.808851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.808866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.808875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.809038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.809214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.809224] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.809232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.811399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.492 [2024-07-26 13:43:52.820331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.820986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.821534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.821571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.821581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.821726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.821910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.821918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.821926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.824269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.492 [2024-07-26 13:43:52.832944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.833592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.834085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.834094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.834102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.834269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.834436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.834444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.834450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.836582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.492 [2024-07-26 13:43:52.845601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.846155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.846616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.846626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.846634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.846740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.846883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.846891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.846898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.849240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.492 [2024-07-26 13:43:52.857947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.858583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.859063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.859076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.859085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.859240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.859351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.859360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.859367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.861625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.492 [2024-07-26 13:43:52.870399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.871163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.871541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.871579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.871591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.871754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.871900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.871912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.871920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.874294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.492 [2024-07-26 13:43:52.882926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.883641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.884127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.884140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.884149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.884305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.884471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.884480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.884487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.886666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.492 [2024-07-26 13:43:52.895339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.492 [2024-07-26 13:43:52.895895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.896477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.492 [2024-07-26 13:43:52.896514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.492 [2024-07-26 13:43:52.896525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.492 [2024-07-26 13:43:52.896669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.492 [2024-07-26 13:43:52.896853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.492 [2024-07-26 13:43:52.896862] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.492 [2024-07-26 13:43:52.896869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.492 [2024-07-26 13:43:52.899143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.492 [2024-07-26 13:43:52.907874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.493 [2024-07-26 13:43:52.908525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.909023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.909036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.493 [2024-07-26 13:43:52.909045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.493 [2024-07-26 13:43:52.909218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.493 [2024-07-26 13:43:52.909405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.493 [2024-07-26 13:43:52.909414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.493 [2024-07-26 13:43:52.909426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.493 [2024-07-26 13:43:52.911753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.493 [2024-07-26 13:43:52.920499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.493 [2024-07-26 13:43:52.921228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.921600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.921612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.493 [2024-07-26 13:43:52.921621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.493 [2024-07-26 13:43:52.921766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.493 [2024-07-26 13:43:52.921930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.493 [2024-07-26 13:43:52.921938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.493 [2024-07-26 13:43:52.921946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.493 [2024-07-26 13:43:52.924190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.493 [2024-07-26 13:43:52.932958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.493 [2024-07-26 13:43:52.933694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.934179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.934191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.493 [2024-07-26 13:43:52.934209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.493 [2024-07-26 13:43:52.934321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.493 [2024-07-26 13:43:52.934505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.493 [2024-07-26 13:43:52.934514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.493 [2024-07-26 13:43:52.934521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.493 [2024-07-26 13:43:52.936575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
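Each resetting-controller cycle above follows the same path: connect() is refused, flushing the qpair fails with a bad file descriptor, controller re-initialization fails, and bdev_nvme schedules another reset. The pattern is retry-until-the-listener-returns; a simplified Python sketch of that loop (placeholder address and fixed interval; SPDK's real reconnect handling in bdev_nvme.c is considerably more involved):

import socket
import time

def wait_for_listener(host: str, port: int, attempts: int = 20, interval_s: float = 0.5) -> bool:
    """Retry a TCP connect until it succeeds or the attempt budget runs out."""
    for _ in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect((host, port))
            return True  # listener is back; a real initiator would rebuild the qpair here
        except OSError:
            time.sleep(interval_s)  # ECONNREFUSED etc.: back off and try again
        finally:
            s.close()
    return False

# wait_for_listener("10.0.0.2", 4420) keeps failing until the target is restarted below.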
00:32:55.493 [2024-07-26 13:43:52.945643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.493 [2024-07-26 13:43:52.946448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.946926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.946939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.493 [2024-07-26 13:43:52.946948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.493 [2024-07-26 13:43:52.947110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.493 [2024-07-26 13:43:52.947269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.493 [2024-07-26 13:43:52.947279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.493 [2024-07-26 13:43:52.947287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1181741 Killed "${NVMF_APP[@]}" "$@" 00:32:55.493 [2024-07-26 13:43:52.949600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.493 13:43:52 -- host/bdevperf.sh@36 -- # tgt_init 00:32:55.493 13:43:52 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:55.493 13:43:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:55.493 13:43:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:55.493 13:43:52 -- common/autotest_common.sh@10 -- # set +x 00:32:55.493 13:43:52 -- nvmf/common.sh@469 -- # nvmfpid=1183462 00:32:55.493 [2024-07-26 13:43:52.958193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.493 13:43:52 -- nvmf/common.sh@470 -- # waitforlisten 1183462 00:32:55.493 13:43:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:55.493 13:43:52 -- common/autotest_common.sh@819 -- # '[' -z 1183462 ']' 00:32:55.493 13:43:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.493 [2024-07-26 13:43:52.958937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 13:43:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:55.493 13:43:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:55.493 [2024-07-26 13:43:52.959473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.493 [2024-07-26 13:43:52.959489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.493 [2024-07-26 13:43:52.959498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.493 13:43:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:55.493 [2024-07-26 13:43:52.959679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.493 13:43:52 -- common/autotest_common.sh@10 -- # set +x 00:32:55.493 [2024-07-26 13:43:52.959845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.493 [2024-07-26 13:43:52.959854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.493 [2024-07-26 13:43:52.959861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.493 [2024-07-26 13:43:52.962170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.756 [2024-07-26 13:43:52.971026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.756 [2024-07-26 13:43:52.971680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.972163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.972173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.756 [2024-07-26 13:43:52.972181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.756 [2024-07-26 13:43:52.972384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.756 [2024-07-26 13:43:52.972546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.756 [2024-07-26 13:43:52.972554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.756 [2024-07-26 13:43:52.972561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.756 [2024-07-26 13:43:52.975044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
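The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the waitforlisten call traced above, which polls until the restarted nvmf_tgt exposes its RPC socket. A rough illustrative equivalent of the socket-polling part in Python (the actual helper is a shell function in SPDK's autotest_common.sh and also checks that the pid is alive; this sketch only covers the wait on the socket path printed above):

import socket
import time

def wait_for_unix_socket(path: str, timeout_s: float = 30.0, interval_s: float = 0.5) -> bool:
    """Poll until a UNIX-domain stream socket at `path` accepts connections, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval_s)
        finally:
            s.close()
    return False

# wait_for_unix_socket("/var/tmp/spdk.sock")  # the socket the log above is waiting for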
00:32:55.756 [2024-07-26 13:43:52.983702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.756 [2024-07-26 13:43:52.984417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.985010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.985023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.756 [2024-07-26 13:43:52.985032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.756 [2024-07-26 13:43:52.985139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.756 [2024-07-26 13:43:52.985311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.756 [2024-07-26 13:43:52.985320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.756 [2024-07-26 13:43:52.985327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.756 [2024-07-26 13:43:52.987611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.756 [2024-07-26 13:43:52.996077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.756 [2024-07-26 13:43:52.996714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.997120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.756 [2024-07-26 13:43:52.997133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.756 [2024-07-26 13:43:52.997142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.756 [2024-07-26 13:43:52.997312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.756 [2024-07-26 13:43:52.997459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.756 [2024-07-26 13:43:52.997468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.756 [2024-07-26 13:43:52.997476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.756 [2024-07-26 13:43:52.999702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.756 [2024-07-26 13:43:53.006278] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:55.756 [2024-07-26 13:43:53.006323] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.756 [2024-07-26 13:43:53.008614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.757 [2024-07-26 13:43:53.009430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.009981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.009994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.757 [2024-07-26 13:43:53.010004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.757 [2024-07-26 13:43:53.010209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.757 [2024-07-26 13:43:53.010338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.757 [2024-07-26 13:43:53.010347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.757 [2024-07-26 13:43:53.010356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.757 [2024-07-26 13:43:53.012663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.757 [2024-07-26 13:43:53.021313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.757 [2024-07-26 13:43:53.021967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.022544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.022581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.757 [2024-07-26 13:43:53.022592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.757 [2024-07-26 13:43:53.022736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.757 [2024-07-26 13:43:53.022919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.757 [2024-07-26 13:43:53.022927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.757 [2024-07-26 13:43:53.022935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.757 [2024-07-26 13:43:53.025178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
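The EAL parameter line above shows nvmf_tgt starting with core mask 0xE (passed as -m 0xE to nvmfappstart, -c 0xE to DPDK). A hex core mask is simply a bitmap of CPU IDs; a tiny illustrative Python helper to expand one:

def cores_from_mask(mask: int) -> list:
    """Expand a DPDK/SPDK-style hex core mask into the CPU core IDs it selects."""
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xE))  # [1, 2, 3] -> the '-m 0xE' mask above pins the target to cores 1-3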
00:32:55.757 [2024-07-26 13:43:53.033976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.757 [2024-07-26 13:43:53.034702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.035191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.035211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.757 [2024-07-26 13:43:53.035221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.757 [2024-07-26 13:43:53.035347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.757 [2024-07-26 13:43:53.035549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.757 [2024-07-26 13:43:53.035558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.757 [2024-07-26 13:43:53.035565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.757 [2024-07-26 13:43:53.037829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.757 EAL: No free 2048 kB hugepages reported on node 1
00:32:55.757 [2024-07-26 13:43:53.046363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.757 [2024-07-26 13:43:53.047046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.047479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.047517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.757 [2024-07-26 13:43:53.047527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.757 [2024-07-26 13:43:53.047652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.757 [2024-07-26 13:43:53.047781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.757 [2024-07-26 13:43:53.047791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.757 [2024-07-26 13:43:53.047798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.757 [2024-07-26 13:43:53.049925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.757 [2024-07-26 13:43:53.058836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.757 [2024-07-26 13:43:53.059578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.060071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.060084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.757 [2024-07-26 13:43:53.060093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.757 [2024-07-26 13:43:53.060250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.757 [2024-07-26 13:43:53.060435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.757 [2024-07-26 13:43:53.060444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.757 [2024-07-26 13:43:53.060452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.757 [2024-07-26 13:43:53.062485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.757 [2024-07-26 13:43:53.071456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.757 [2024-07-26 13:43:53.072087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.072713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.757 [2024-07-26 13:43:53.072749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.757 [2024-07-26 13:43:53.072761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.757 [2024-07-26 13:43:53.072886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.757 [2024-07-26 13:43:53.073052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.757 [2024-07-26 13:43:53.073061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.757 [2024-07-26 13:43:53.073069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.757 [2024-07-26 13:43:53.075312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.757 [2024-07-26 13:43:53.083999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.757 [2024-07-26 13:43:53.084670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.085138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.085148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.757 [2024-07-26 13:43:53.085156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.757 [2024-07-26 13:43:53.085265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.757 [2024-07-26 13:43:53.085409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.757 [2024-07-26 13:43:53.085417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.757 [2024-07-26 13:43:53.085425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.757 [2024-07-26 13:43:53.087783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.757 [2024-07-26 13:43:53.089081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:55.757 [2024-07-26 13:43:53.096442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.757 [2024-07-26 13:43:53.097134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.097728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.757 [2024-07-26 13:43:53.097765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.757 [2024-07-26 13:43:53.097776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.757 [2024-07-26 13:43:53.097923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.757 [2024-07-26 13:43:53.098052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.757 [2024-07-26 13:43:53.098061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.757 [2024-07-26 13:43:53.098069] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.757 [2024-07-26 13:43:53.100326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.758 [2024-07-26 13:43:53.109073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.758 [2024-07-26 13:43:53.109775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.110239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.110259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.758 [2024-07-26 13:43:53.110269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.758 [2024-07-26 13:43:53.110383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.758 [2024-07-26 13:43:53.110490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.758 [2024-07-26 13:43:53.110499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.758 [2024-07-26 13:43:53.110506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.758 [2024-07-26 13:43:53.112565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.758 [2024-07-26 13:43:53.115864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:55.758 [2024-07-26 13:43:53.115950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:55.758 [2024-07-26 13:43:53.115955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:55.758 [2024-07-26 13:43:53.115961] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:55.758 [2024-07-26 13:43:53.115993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:55.758 [2024-07-26 13:43:53.116179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:55.758 [2024-07-26 13:43:53.116181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:55.758 [2024-07-26 13:43:53.121722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.758 [2024-07-26 13:43:53.122489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.122989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.123002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.758 [2024-07-26 13:43:53.123012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.758 [2024-07-26 13:43:53.123195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.758 [2024-07-26 13:43:53.123354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.758 [2024-07-26 13:43:53.123363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.758 [2024-07-26 13:43:53.123371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.758 [2024-07-26 13:43:53.125545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.758 [2024-07-26 13:43:53.134054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:55.758 [2024-07-26 13:43:53.134636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.135146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:55.758 [2024-07-26 13:43:53.135156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420
00:32:55.758 [2024-07-26 13:43:53.135164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set
00:32:55.758 [2024-07-26 13:43:53.135294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor
00:32:55.758 [2024-07-26 13:43:53.135456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:55.758 [2024-07-26 13:43:53.135464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:55.758 [2024-07-26 13:43:53.135472] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:55.758 [2024-07-26 13:43:53.137805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:55.758 [2024-07-26 13:43:53.146486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.147075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.147604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.147643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.758 [2024-07-26 13:43:53.147653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.758 [2024-07-26 13:43:53.147799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.758 [2024-07-26 13:43:53.147945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.758 [2024-07-26 13:43:53.147954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.758 [2024-07-26 13:43:53.147962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.758 [2024-07-26 13:43:53.150312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.758 [2024-07-26 13:43:53.158923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.159681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.160186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.160199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.758 [2024-07-26 13:43:53.160222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.758 [2024-07-26 13:43:53.160409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.758 [2024-07-26 13:43:53.160519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.758 [2024-07-26 13:43:53.160532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.758 [2024-07-26 13:43:53.160540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.758 [2024-07-26 13:43:53.163131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.758 [2024-07-26 13:43:53.171553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.172176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.172748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.172785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.758 [2024-07-26 13:43:53.172795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.758 [2024-07-26 13:43:53.172958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.758 [2024-07-26 13:43:53.173085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.758 [2024-07-26 13:43:53.173093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.758 [2024-07-26 13:43:53.173101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.758 [2024-07-26 13:43:53.175222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.758 [2024-07-26 13:43:53.184005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.184726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.185138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.185151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.758 [2024-07-26 13:43:53.185160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.758 [2024-07-26 13:43:53.185316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.758 [2024-07-26 13:43:53.185409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.758 [2024-07-26 13:43:53.185417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.758 [2024-07-26 13:43:53.185425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.758 [2024-07-26 13:43:53.187865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.758 [2024-07-26 13:43:53.196336] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.197006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.197566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.197602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.758 [2024-07-26 13:43:53.197613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.758 [2024-07-26 13:43:53.197776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.758 [2024-07-26 13:43:53.197922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.758 [2024-07-26 13:43:53.197930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.758 [2024-07-26 13:43:53.197946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.758 [2024-07-26 13:43:53.200376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.758 [2024-07-26 13:43:53.208638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.758 [2024-07-26 13:43:53.209452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.209953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.758 [2024-07-26 13:43:53.209966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.759 [2024-07-26 13:43:53.209975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.759 [2024-07-26 13:43:53.210137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.759 [2024-07-26 13:43:53.210289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.759 [2024-07-26 13:43:53.210298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.759 [2024-07-26 13:43:53.210305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.759 [2024-07-26 13:43:53.212911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.759 [2024-07-26 13:43:53.221334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.759 [2024-07-26 13:43:53.221924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.759 [2024-07-26 13:43:53.222526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.759 [2024-07-26 13:43:53.222564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:55.759 [2024-07-26 13:43:53.222575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:55.759 [2024-07-26 13:43:53.222704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:55.759 [2024-07-26 13:43:53.222795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.759 [2024-07-26 13:43:53.222803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.759 [2024-07-26 13:43:53.222811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.759 [2024-07-26 13:43:53.224997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.020 [2024-07-26 13:43:53.233692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.020 [2024-07-26 13:43:53.234505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.020 [2024-07-26 13:43:53.234916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.020 [2024-07-26 13:43:53.234928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.020 [2024-07-26 13:43:53.234938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.020 [2024-07-26 13:43:53.235101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.020 [2024-07-26 13:43:53.235273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.020 [2024-07-26 13:43:53.235282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.020 [2024-07-26 13:43:53.235290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.020 [2024-07-26 13:43:53.237632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.020 [2024-07-26 13:43:53.246061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.246710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.247192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.247208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.247216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.247305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.247448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.247456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.247463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.249567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.021 [2024-07-26 13:43:53.258622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.258946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.259512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.259549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.259560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.259722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.259850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.259858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.259866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.262198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.021 [2024-07-26 13:43:53.271170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.271653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.272221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.272235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.272244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.272425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.272572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.272580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.272587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.274762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.021 [2024-07-26 13:43:53.283549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.284161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.284657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.284694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.284705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.284830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.284957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.284966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.284973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.287358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.021 [2024-07-26 13:43:53.296002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.296730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.297239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.297253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.297262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.297406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.297515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.297524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.297531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.299926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.021 [2024-07-26 13:43:53.308524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.309251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.309823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.309837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.309846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.309991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.310156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.310164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.310171] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.312476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.021 [2024-07-26 13:43:53.320997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.321689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.322137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.322150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.322159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.322335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.322483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.322492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.322499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.324754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.021 [2024-07-26 13:43:53.333499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.334175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.334767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.334804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.334814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.334958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.335104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.335113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.335121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.021 [2024-07-26 13:43:53.337238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.021 [2024-07-26 13:43:53.345946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.021 [2024-07-26 13:43:53.346692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.347217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.021 [2024-07-26 13:43:53.347232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.021 [2024-07-26 13:43:53.347241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.021 [2024-07-26 13:43:53.347348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.021 [2024-07-26 13:43:53.347457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.021 [2024-07-26 13:43:53.347466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.021 [2024-07-26 13:43:53.347473] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.349841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.022 [2024-07-26 13:43:53.358456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.359123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.359389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.359400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.359408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.359515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.359640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.359647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.359654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.362090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.022 [2024-07-26 13:43:53.370949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.371673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.372168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.372181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.372190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.372376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.372578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.372594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.372602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.374957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.022 [2024-07-26 13:43:53.383409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.383814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.384314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.384328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.384338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.384518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.384628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.384636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.384644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.386978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.022 [2024-07-26 13:43:53.395895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.396565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.397028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.397038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.397050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.397216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.397360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.397369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.397375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.399506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.022 [2024-07-26 13:43:53.408557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.409111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.409699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.409714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.409723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.409848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.409976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.409984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.409991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.412149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.022 [2024-07-26 13:43:53.421004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.421705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.422213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.422226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.422235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.422416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.422600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.422608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.422615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.424884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.022 [2024-07-26 13:43:53.433409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.434073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.434820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.434857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.434868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.435071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.435243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.435252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.435260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.437506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.022 [2024-07-26 13:43:53.445900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.446622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.447123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.447136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.447145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.447315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.447443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.022 [2024-07-26 13:43:53.447451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.022 [2024-07-26 13:43:53.447460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.022 [2024-07-26 13:43:53.449705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.022 [2024-07-26 13:43:53.458287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.022 [2024-07-26 13:43:53.459014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.459610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.022 [2024-07-26 13:43:53.459646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.022 [2024-07-26 13:43:53.459657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.022 [2024-07-26 13:43:53.459820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.022 [2024-07-26 13:43:53.459966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.023 [2024-07-26 13:43:53.459974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.023 [2024-07-26 13:43:53.459982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.023 [2024-07-26 13:43:53.462185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.023 [2024-07-26 13:43:53.470673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.023 [2024-07-26 13:43:53.471137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.023 [2024-07-26 13:43:53.471705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.023 [2024-07-26 13:43:53.471742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.023 [2024-07-26 13:43:53.471753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.023 [2024-07-26 13:43:53.471956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.023 [2024-07-26 13:43:53.472085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.023 [2024-07-26 13:43:53.472094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.023 [2024-07-26 13:43:53.472101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.023 [2024-07-26 13:43:53.474473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.023 [2024-07-26 13:43:53.483087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.023 [2024-07-26 13:43:53.483732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.023 [2024-07-26 13:43:53.483989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.023 [2024-07-26 13:43:53.484000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.023 [2024-07-26 13:43:53.484007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.023 [2024-07-26 13:43:53.484169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.023 [2024-07-26 13:43:53.484319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.023 [2024-07-26 13:43:53.484328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.023 [2024-07-26 13:43:53.484335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.023 [2024-07-26 13:43:53.486714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.285 [2024-07-26 13:43:53.495425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.285 [2024-07-26 13:43:53.495892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-26 13:43:53.496491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.285 [2024-07-26 13:43:53.496528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.285 [2024-07-26 13:43:53.496539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.285 [2024-07-26 13:43:53.496720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.285 [2024-07-26 13:43:53.496904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.285 [2024-07-26 13:43:53.496913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.285 [2024-07-26 13:43:53.496921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.285 [2024-07-26 13:43:53.499106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.285 [2024-07-26 13:43:53.508003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.508681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.509157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.509167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.509174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.509361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.509491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.509499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.509506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.511599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.286 [2024-07-26 13:43:53.520538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.520851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.521344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.521356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.521364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.521528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.521709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.521717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.521724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.524101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.286 [2024-07-26 13:43:53.533025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.533388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.533849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.533858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.533865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.534009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.534133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.534141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.534147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.536446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.286 [2024-07-26 13:43:53.545607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.546294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.546761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.546771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.546778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.546976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.547100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.547109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.547120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.549544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.286 [2024-07-26 13:43:53.558105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.558558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.559025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.559034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.559042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.559129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.559300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.559310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.559317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.561587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.286 [2024-07-26 13:43:53.570658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.571280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.571531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.571546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.571554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.571719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.571862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.571870] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.571877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.574223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.286 [2024-07-26 13:43:53.583117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.583695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.584189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.584198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.584211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.286 [2024-07-26 13:43:53.584318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.286 [2024-07-26 13:43:53.584480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.286 [2024-07-26 13:43:53.584488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.286 [2024-07-26 13:43:53.584499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.286 [2024-07-26 13:43:53.586747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.286 [2024-07-26 13:43:53.595661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.286 [2024-07-26 13:43:53.596033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.596272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.286 [2024-07-26 13:43:53.596285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.286 [2024-07-26 13:43:53.596292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.596402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.596527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.596536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.596542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.598775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.287 [2024-07-26 13:43:53.608156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.608788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.608994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.609009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.609016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.609181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.609312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.609328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.609335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.611648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.287 [2024-07-26 13:43:53.620541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.621162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.621251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.621261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.621268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.621449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.621592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.621600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.621607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.623899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.287 [2024-07-26 13:43:53.633154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.633795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.634303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.634313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.634321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.634410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.634534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.634542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.634549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.636681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.287 [2024-07-26 13:43:53.645445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.646066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.646462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.646472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.646479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.646678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.646784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.646792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.646799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.649131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.287 [2024-07-26 13:43:53.658003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.658658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.659167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.659177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.659184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.659337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.659482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.659490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.659496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.661671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.287 [2024-07-26 13:43:53.670776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.671423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.671995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.672008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.672016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.672164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.672311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.672320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.672327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.674482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.287 [2024-07-26 13:43:53.683237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.683892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.684472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.684509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.684520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.684664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.684774] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.684783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.684791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.687051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.287 [2024-07-26 13:43:53.695802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.696605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.697102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.697115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.697125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.287 [2024-07-26 13:43:53.697255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.287 [2024-07-26 13:43:53.697402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.287 [2024-07-26 13:43:53.697411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.287 [2024-07-26 13:43:53.697419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.287 [2024-07-26 13:43:53.699756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.287 [2024-07-26 13:43:53.708333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.287 [2024-07-26 13:43:53.708912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.709569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.287 [2024-07-26 13:43:53.709606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.287 [2024-07-26 13:43:53.709617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.288 [2024-07-26 13:43:53.709761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.288 [2024-07-26 13:43:53.709925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.288 [2024-07-26 13:43:53.709934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.288 [2024-07-26 13:43:53.709942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.288 [2024-07-26 13:43:53.711934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.288 [2024-07-26 13:43:53.720660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.288 [2024-07-26 13:43:53.721411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.721784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.721797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.288 [2024-07-26 13:43:53.721806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.288 [2024-07-26 13:43:53.721950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.288 [2024-07-26 13:43:53.722059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.288 [2024-07-26 13:43:53.722068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.288 [2024-07-26 13:43:53.722075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.288 [2024-07-26 13:43:53.724420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.288 [2024-07-26 13:43:53.733112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.288 [2024-07-26 13:43:53.733781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.734158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.734169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.288 [2024-07-26 13:43:53.734176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.288 [2024-07-26 13:43:53.734343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.288 [2024-07-26 13:43:53.734505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.288 [2024-07-26 13:43:53.734515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.288 [2024-07-26 13:43:53.734522] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.288 [2024-07-26 13:43:53.736634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.288 [2024-07-26 13:43:53.745640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.288 [2024-07-26 13:43:53.745992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.746587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.288 [2024-07-26 13:43:53.746624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.288 [2024-07-26 13:43:53.746639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.288 [2024-07-26 13:43:53.746821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.288 [2024-07-26 13:43:53.746968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.288 [2024-07-26 13:43:53.746977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.288 [2024-07-26 13:43:53.746985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.288 [2024-07-26 13:43:53.749293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.551 [2024-07-26 13:43:53.758314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.758989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.759437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.759474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.759485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 [2024-07-26 13:43:53.759647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.759793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.551 [2024-07-26 13:43:53.759802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.551 [2024-07-26 13:43:53.759810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.551 [2024-07-26 13:43:53.762105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.551 [2024-07-26 13:43:53.770874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.771643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 13:43:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:56.551 [2024-07-26 13:43:53.772106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.772121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.772130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 13:43:53 -- common/autotest_common.sh@852 -- # return 0 00:32:56.551 [2024-07-26 13:43:53.772336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.772521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.551 13:43:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:56.551 [2024-07-26 13:43:53.772529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.551 [2024-07-26 13:43:53.772538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.551 13:43:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:56.551 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.551 [2024-07-26 13:43:53.775029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.551 [2024-07-26 13:43:53.783429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.783806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.784404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.784442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.784452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 [2024-07-26 13:43:53.784615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.784762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.551 [2024-07-26 13:43:53.784770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.551 [2024-07-26 13:43:53.784778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.551 [2024-07-26 13:43:53.787038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.551 [2024-07-26 13:43:53.796098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.796491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.797001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.797013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.797020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 [2024-07-26 13:43:53.797211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.797375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.551 [2024-07-26 13:43:53.797383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.551 [2024-07-26 13:43:53.797390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.551 [2024-07-26 13:43:53.799685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.551 [2024-07-26 13:43:53.808791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.809475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.810000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.810009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.810016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 [2024-07-26 13:43:53.810141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.810306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.551 [2024-07-26 13:43:53.810314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.551 [2024-07-26 13:43:53.810321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.551 [2024-07-26 13:43:53.812471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.551 13:43:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.551 13:43:53 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:56.551 13:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.551 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.551 [2024-07-26 13:43:53.820890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.551 [2024-07-26 13:43:53.821109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.551 [2024-07-26 13:43:53.821775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.822418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.551 [2024-07-26 13:43:53.822456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.551 [2024-07-26 13:43:53.822466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.551 [2024-07-26 13:43:53.822666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.551 [2024-07-26 13:43:53.822887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.822896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.822903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.552 [2024-07-26 13:43:53.825253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.552 13:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.552 13:43:53 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:56.552 13:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.552 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.552 [2024-07-26 13:43:53.833509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 [2024-07-26 13:43:53.834188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.834425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.834442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.552 [2024-07-26 13:43:53.834450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.552 [2024-07-26 13:43:53.834575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.552 [2024-07-26 13:43:53.834737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.834745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.834752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:56.552 [2024-07-26 13:43:53.837196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.552 [2024-07-26 13:43:53.845958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 [2024-07-26 13:43:53.846627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.847089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.847098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.552 [2024-07-26 13:43:53.847105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.552 [2024-07-26 13:43:53.847258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.552 [2024-07-26 13:43:53.847367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.847375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.847382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.552 [2024-07-26 13:43:53.849744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.552 [2024-07-26 13:43:53.858395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 [2024-07-26 13:43:53.859096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.859347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.859364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.552 [2024-07-26 13:43:53.859372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.552 [2024-07-26 13:43:53.859537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.552 [2024-07-26 13:43:53.859644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.859652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.859660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.552 [2024-07-26 13:43:53.861936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.552 Malloc0 00:32:56.552 13:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.552 13:43:53 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:56.552 13:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.552 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.552 [2024-07-26 13:43:53.870845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 [2024-07-26 13:43:53.871431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.871901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.871911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.552 [2024-07-26 13:43:53.871918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.552 [2024-07-26 13:43:53.872025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.552 [2024-07-26 13:43:53.872186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.872194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.872204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.552 [2024-07-26 13:43:53.874453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:56.552 13:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.552 13:43:53 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:56.552 13:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.552 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.552 [2024-07-26 13:43:53.883330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 [2024-07-26 13:43:53.883922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 13:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.552 13:43:53 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.552 [2024-07-26 13:43:53.884493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:56.552 [2024-07-26 13:43:53.884530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a26c0 with addr=10.0.0.2, port=4420 00:32:56.552 [2024-07-26 13:43:53.884545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a26c0 is same with the state(5) to be set 00:32:56.552 13:43:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:56.552 [2024-07-26 13:43:53.884727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a26c0 (9): Bad file descriptor 00:32:56.552 13:43:53 -- common/autotest_common.sh@10 -- # set +x 00:32:56.552 [2024-07-26 13:43:53.884911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:56.552 [2024-07-26 13:43:53.884921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:56.552 [2024-07-26 13:43:53.884929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.552 [2024-07-26 13:43:53.887163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:56.552 [2024-07-26 13:43:53.891132] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.552 [2024-07-26 13:43:53.895749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.552 13:43:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:56.552 13:43:53 -- host/bdevperf.sh@38 -- # wait 1182331 00:32:56.552 [2024-07-26 13:43:53.927590] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
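Interleaved with the failing resets above, the shell trace shows the target side being brought up over RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, the nqn.2016-06.io.spdk:cnode1 subsystem, its namespace, and finally the listener on 10.0.0.2 port 4420. The "NVMe/TCP Target Listening" notice is what turns the reset loop around, ending in "Resetting controller successful." Pulled out of the xtrace into one place, the sequence is roughly the following (a sketch: rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; the plain rpc.py form and its default RPC socket are assumptions, not shown verbatim in this excerpt):

    # Target-side bring-up traced above, against an already running nvmf_tgt:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Once the listener is up, the host's next periodic reset connects and the
    # log flips to "Resetting controller successful."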
00:33:06.625 00:33:06.625 Latency(us) 00:33:06.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.625 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:06.625 Verification LBA range: start 0x0 length 0x4000 00:33:06.625 Nvme1n1 : 15.00 13912.28 54.34 14420.61 0.00 4502.75 1372.16 15837.87 00:33:06.625 =================================================================================================================== 00:33:06.625 Total : 13912.28 54.34 14420.61 0.00 4502.75 1372.16 15837.87 00:33:06.625 13:44:02 -- host/bdevperf.sh@39 -- # sync 00:33:06.625 13:44:02 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:06.625 13:44:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:06.625 13:44:02 -- common/autotest_common.sh@10 -- # set +x 00:33:06.625 13:44:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:06.625 13:44:02 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:06.625 13:44:02 -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:06.625 13:44:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:06.625 13:44:02 -- nvmf/common.sh@116 -- # sync 00:33:06.625 13:44:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:06.625 13:44:02 -- nvmf/common.sh@119 -- # set +e 00:33:06.625 13:44:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:06.625 13:44:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:06.625 rmmod nvme_tcp 00:33:06.625 rmmod nvme_fabrics 00:33:06.625 rmmod nvme_keyring 00:33:06.625 13:44:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:06.625 13:44:02 -- nvmf/common.sh@123 -- # set -e 00:33:06.625 13:44:02 -- nvmf/common.sh@124 -- # return 0 00:33:06.625 13:44:02 -- nvmf/common.sh@477 -- # '[' -n 1183462 ']' 00:33:06.625 13:44:02 -- nvmf/common.sh@478 -- # killprocess 1183462 00:33:06.625 13:44:02 -- common/autotest_common.sh@926 -- # '[' -z 1183462 ']' 00:33:06.625 13:44:02 -- common/autotest_common.sh@930 -- # kill -0 1183462 00:33:06.625 13:44:02 -- common/autotest_common.sh@931 -- # uname 00:33:06.625 13:44:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:06.625 13:44:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1183462 00:33:06.625 13:44:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:06.625 13:44:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:06.625 13:44:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1183462' 00:33:06.625 killing process with pid 1183462 00:33:06.625 13:44:02 -- common/autotest_common.sh@945 -- # kill 1183462 00:33:06.625 13:44:02 -- common/autotest_common.sh@950 -- # wait 1183462 00:33:06.625 13:44:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:06.625 13:44:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:06.625 13:44:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:06.625 13:44:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.625 13:44:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:06.625 13:44:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.625 13:44:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.625 13:44:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.569 13:44:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:07.569 00:33:07.569 real 0m27.515s 00:33:07.569 user 1m2.281s 00:33:07.569 sys 0m7.030s 00:33:07.569 13:44:04 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.569 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:33:07.569 ************************************ 00:33:07.569 END TEST nvmf_bdevperf 00:33:07.569 ************************************ 00:33:07.569 13:44:04 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:07.569 13:44:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:07.569 13:44:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.569 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:33:07.569 ************************************ 00:33:07.569 START TEST nvmf_target_disconnect 00:33:07.569 ************************************ 00:33:07.569 13:44:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:07.569 * Looking for test storage... 00:33:07.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.569 13:44:04 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.569 13:44:04 -- nvmf/common.sh@7 -- # uname -s 00:33:07.569 13:44:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.569 13:44:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.569 13:44:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.569 13:44:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.569 13:44:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.569 13:44:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.569 13:44:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.569 13:44:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.569 13:44:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.569 13:44:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.569 13:44:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:07.569 13:44:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:07.569 13:44:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.569 13:44:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.569 13:44:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.569 13:44:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.569 13:44:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.569 13:44:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.569 13:44:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.569 13:44:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.570 13:44:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.570 13:44:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.570 13:44:04 -- paths/export.sh@5 -- # export PATH 00:33:07.570 13:44:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.570 13:44:04 -- nvmf/common.sh@46 -- # : 0 00:33:07.570 13:44:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:07.570 13:44:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:07.570 13:44:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:07.570 13:44:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.570 13:44:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.570 13:44:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:07.570 13:44:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:07.570 13:44:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:07.570 13:44:04 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:07.570 13:44:04 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:07.570 13:44:04 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:07.570 13:44:04 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:07.570 13:44:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:07.570 13:44:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.570 13:44:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:07.570 13:44:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:07.570 13:44:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:07.570 13:44:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.570 13:44:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:07.570 13:44:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.570 13:44:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:07.570 13:44:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:07.570 13:44:04 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:33:07.570 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:33:15.713 13:44:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:15.713 13:44:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:15.713 13:44:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:15.713 13:44:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:15.713 13:44:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:15.713 13:44:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:15.713 13:44:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:15.713 13:44:11 -- nvmf/common.sh@294 -- # net_devs=() 00:33:15.713 13:44:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:15.713 13:44:11 -- nvmf/common.sh@295 -- # e810=() 00:33:15.713 13:44:11 -- nvmf/common.sh@295 -- # local -ga e810 00:33:15.713 13:44:11 -- nvmf/common.sh@296 -- # x722=() 00:33:15.713 13:44:11 -- nvmf/common.sh@296 -- # local -ga x722 00:33:15.713 13:44:11 -- nvmf/common.sh@297 -- # mlx=() 00:33:15.713 13:44:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:15.713 13:44:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.713 13:44:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:15.713 13:44:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:15.713 13:44:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:15.713 13:44:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:15.713 13:44:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:15.713 13:44:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:15.713 13:44:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:15.713 13:44:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:15.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:15.713 13:44:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:15.714 13:44:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:15.714 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:15.714 13:44:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:15.714 13:44:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:15.714 13:44:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.714 13:44:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:15.714 13:44:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.714 13:44:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:15.714 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:15.714 13:44:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.714 13:44:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:15.714 13:44:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.714 13:44:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:15.714 13:44:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.714 13:44:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:15.714 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:15.714 13:44:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.714 13:44:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:15.714 13:44:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:15.714 13:44:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:15.714 13:44:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:15.714 13:44:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.714 13:44:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.714 13:44:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.714 13:44:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:15.714 13:44:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.714 13:44:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.714 13:44:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:15.714 13:44:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.714 13:44:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.714 13:44:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:15.714 13:44:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:15.714 13:44:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.714 13:44:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.714 13:44:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.714 13:44:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.714 13:44:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:15.714 13:44:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.714 13:44:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.714 13:44:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.714 13:44:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:15.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:15.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:33:15.714 00:33:15.714 --- 10.0.0.2 ping statistics --- 00:33:15.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.714 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:33:15.714 13:44:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:33:15.714 00:33:15.714 --- 10.0.0.1 ping statistics --- 00:33:15.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.714 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:33:15.714 13:44:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.714 13:44:12 -- nvmf/common.sh@410 -- # return 0 00:33:15.714 13:44:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:15.714 13:44:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.714 13:44:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:15.714 13:44:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:15.714 13:44:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.714 13:44:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:15.714 13:44:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:15.714 13:44:12 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:15.714 13:44:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:15.714 13:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:15.714 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:33:15.714 ************************************ 00:33:15.714 START TEST nvmf_target_disconnect_tc1 00:33:15.714 ************************************ 00:33:15.714 13:44:12 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:15.714 13:44:12 -- host/target_disconnect.sh@32 -- # set +e 00:33:15.714 13:44:12 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.714 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.714 [2024-07-26 13:44:12.165407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.714 [2024-07-26 13:44:12.165951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.714 [2024-07-26 13:44:12.165966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c3f10 with addr=10.0.0.2, port=4420 00:33:15.714 [2024-07-26 13:44:12.165995] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:15.714 [2024-07-26 13:44:12.166012] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:15.714 [2024-07-26 13:44:12.166020] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:15.714 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:15.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:15.714 Initializing NVMe Controllers 00:33:15.714 13:44:12 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:15.714 13:44:12 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:15.714 13:44:12 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:15.714 13:44:12 -- common/autotest_common.sh@1132 -- # return 0 00:33:15.714 
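The tc1 case above exercises the failure path on purpose: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, so connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot create the admin qpair, and the script, running under set +e, treats the non-zero exit as the expected outcome. A minimal sketch of that flow, assuming SPDK_EXAMPLES points at the build/examples directory of an SPDK checkout (the variable is hypothetical; the flags mirror the command shown in the log):

# Run the initiator against an address with no listener and require it to fail.
set +e
"$SPDK_EXAMPLES/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
rc=$?
set -e
if [ "$rc" -eq 0 ]; then
    echo "expected spdk_nvme_probe() to fail with ECONNREFUSED, but the run succeeded" >&2
    exit 1
fi

The script itself compares the exact exit status (the '[' 1 '!=' 1 ']' check that follows); the sketch only shows the shape of the assertion.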
13:44:12 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:15.714 13:44:12 -- host/target_disconnect.sh@41 -- # set -e 00:33:15.714 00:33:15.714 real 0m0.098s 00:33:15.714 user 0m0.045s 00:33:15.714 sys 0m0.051s 00:33:15.714 13:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:15.714 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:33:15.714 ************************************ 00:33:15.714 END TEST nvmf_target_disconnect_tc1 00:33:15.714 ************************************ 00:33:15.714 13:44:12 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:15.714 13:44:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:15.714 13:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:15.714 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:33:15.714 ************************************ 00:33:15.714 START TEST nvmf_target_disconnect_tc2 00:33:15.714 ************************************ 00:33:15.714 13:44:12 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:15.714 13:44:12 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:15.714 13:44:12 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:15.714 13:44:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:15.714 13:44:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:15.714 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:33:15.714 13:44:12 -- nvmf/common.sh@469 -- # nvmfpid=1189524 00:33:15.714 13:44:12 -- nvmf/common.sh@470 -- # waitforlisten 1189524 00:33:15.714 13:44:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:15.714 13:44:12 -- common/autotest_common.sh@819 -- # '[' -z 1189524 ']' 00:33:15.714 13:44:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.714 13:44:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:15.714 13:44:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.714 13:44:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:15.714 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:33:15.714 [2024-07-26 13:44:12.276590] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:15.714 [2024-07-26 13:44:12.276647] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.714 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.714 [2024-07-26 13:44:12.366361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:15.714 [2024-07-26 13:44:12.412474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:15.714 [2024-07-26 13:44:12.412627] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.714 [2024-07-26 13:44:12.412635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.714 [2024-07-26 13:44:12.412643] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
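For tc2 the target is started for real: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, pinned to cores 4-7 (-m 0xF0) with the full tracepoint mask (-e 0xFFFF), and the entries that follow show it being provisioned over RPC with a 64 MB malloc bdev (512-byte blocks), the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, and listeners on 10.0.0.2:4420. A rough equivalent driven directly with scripts/rpc.py, assuming $rootdir is the SPDK checkout and the default /var/tmp/spdk.sock RPC socket (both assumptions; the RPC arguments mirror the rpc_cmd calls in the log):

# Start nvmf_tgt in the target namespace (cores 4-7, full trace mask) and wait
# for its JSON-RPC unix socket to appear before configuring it.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

rpc="$rootdir/scripts/rpc.py"
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The test's waitforlisten helper does stricter checking than the socket-exists loop above; the point here is only the ordering: the 4420 listener must be up before tc2's reconnect run starts, and that run is what the kill -9 further down deliberately interrupts.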
00:33:15.714 [2024-07-26 13:44:12.412812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:15.715 [2024-07-26 13:44:12.412964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:15.715 [2024-07-26 13:44:12.413124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:15.715 [2024-07-26 13:44:12.413125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:15.715 13:44:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:15.715 13:44:13 -- common/autotest_common.sh@852 -- # return 0 00:33:15.715 13:44:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:15.715 13:44:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 13:44:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.715 13:44:13 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:15.715 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 Malloc0 00:33:15.715 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.715 13:44:13 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:15.715 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 [2024-07-26 13:44:13.140836] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.715 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.715 13:44:13 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:15.715 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.715 13:44:13 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:15.715 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.715 13:44:13 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.715 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.715 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.715 [2024-07-26 13:44:13.181273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.975 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.975 13:44:13 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:15.975 13:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:15.975 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:33:15.975 13:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:15.975 13:44:13 -- host/target_disconnect.sh@50 -- # reconnectpid=1189560 00:33:15.975 13:44:13 -- host/target_disconnect.sh@52 -- # sleep 2 00:33:15.975 13:44:13 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:15.975 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.891 13:44:15 -- host/target_disconnect.sh@53 -- # kill -9 1189524 00:33:17.891 13:44:15 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Read completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.891 starting I/O failed 00:33:17.891 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Read completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 Write completed with error (sct=0, sc=8) 00:33:17.892 starting I/O failed 00:33:17.892 [2024-07-26 13:44:15.221957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.892 [2024-07-26 13:44:15.222499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.222851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.222870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with 
addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.223455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.223939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.223959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.224445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.224964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.224978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.225433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.225932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.225946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.226492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.226862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.226875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.227104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.227444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.227455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.227779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.228121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.228131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.228636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.228995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.229005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 
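This burst of failures follows directly from the kill -9 of the target (pid 1189524): the queued I/Os (queue depth 32) complete with error status sct=0, sc=8, which in the NVMe generic status set corresponds to a command aborted due to submission queue deletion, and every reconnect attempt after that gets connect() errno 111 because nothing is listening on 10.0.0.2:4420 any more. That is what the repeated "qpair failed and we were unable to recover it" entries record until the 10-second run ends. If this were being debugged interactively, one way to confirm the listener is gone (assuming the iproute2 ss tool is available on the host; this command is not part of the test):

# List TCP listeners on port 4420 inside the target's namespace.
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
# An empty table means the target's listener is gone, so each new connect()
# from the initiator side is refused (errno 111 / ECONNREFUSED).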
00:33:17.892 [2024-07-26 13:44:15.229175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.229663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.229673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.230056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.230676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.230712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.231229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.231780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.231790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.232102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.232514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.232529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.233043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.233496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.233533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.233991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.234503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.234540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.235088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.235590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.235600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 
00:33:17.892 [2024-07-26 13:44:15.235940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.236518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.236555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.236936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.237440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.237476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.237876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.238267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.238278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.238654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.239039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.239049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.239638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.240109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.240118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.240684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.241135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.241144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.241669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.242064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.242083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 
00:33:17.892 [2024-07-26 13:44:15.242549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.243049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.243059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.243539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.243939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.243951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.892 qpair failed and we were unable to recover it. 00:33:17.892 [2024-07-26 13:44:15.244505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.245003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.892 [2024-07-26 13:44:15.245016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.245399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.245929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.245945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.246536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.246976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.246992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.247569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.248090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.248106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.248267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.248824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.248837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 
00:33:17.893 [2024-07-26 13:44:15.249295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.249815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.249827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.250291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.250784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.250796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.251278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.251761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.251778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.252254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.252418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.252435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.252951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.253324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.253336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.253651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.254160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.254172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.254515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.254980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.254992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 
00:33:17.893 [2024-07-26 13:44:15.255629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.256036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.256047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.256635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.257165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.257186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.257836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.258404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.258459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.258970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.259582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.259637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.260079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.260566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.260583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.260931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.261512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.261567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.262083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.262646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.262665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 
00:33:17.893 [2024-07-26 13:44:15.262980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.263561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.263616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.264180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.264632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.264649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.265011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.265564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.265618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.266001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.266473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.266528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.267064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.267556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.267611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.267919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.268419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.268438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.268921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.269380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.269401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 
00:33:17.893 [2024-07-26 13:44:15.269750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.270163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.270183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.893 qpair failed and we were unable to recover it. 00:33:17.893 [2024-07-26 13:44:15.270704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.893 [2024-07-26 13:44:15.271144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.271164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.271709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.272179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.272199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.272730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.273223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.273244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.273789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.274177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.274197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.274744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.275230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.275251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.275762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.276453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.276522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 
00:33:17.894 [2024-07-26 13:44:15.277145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.277643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.277709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.278406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.278965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.278992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.279582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.280138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.280165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.280735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.281218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.281239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.281510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.281996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.282016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.282494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.283092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.283129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.283681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.284188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.284226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 
00:33:17.894 [2024-07-26 13:44:15.284711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.285191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.285228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.285717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.286193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.286231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.286815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.287467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.287554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.287875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.288461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.288547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.289133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.289718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.289750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.290394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.290980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.291017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.291559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.291934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.291962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 
00:33:17.894 [2024-07-26 13:44:15.292537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.293015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.293042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.293332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.293890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.293917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.294425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.294904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.294931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.295440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.295919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.295946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.296561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.297149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.297185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.297720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.298211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.298240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 00:33:17.894 [2024-07-26 13:44:15.298719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.299191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.299245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.894 qpair failed and we were unable to recover it. 
00:33:17.894 [2024-07-26 13:44:15.299758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.894 [2024-07-26 13:44:15.300238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.300270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.300790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.301403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.301490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.302074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.302566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.302596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.302997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.303520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.303607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.304197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.304634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.304675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.305091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.305650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.305681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.306077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.306462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.306499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 
00:33:17.895 [2024-07-26 13:44:15.307015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.307594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.307681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.308148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.308667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.308697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.309095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.309507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.309537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.310044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.310521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.310549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.311058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.311538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.311566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.312063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.312555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.312584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.313074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.313587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.313616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 
00:33:17.895 [2024-07-26 13:44:15.314121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.314604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.314632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.315129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.315521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.315549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.316076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.316564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.316592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.316990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.317477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.317506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.318014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.318656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.318744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.319226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.319775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.319805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.320448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.321046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.321082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 
00:33:17.895 [2024-07-26 13:44:15.321501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.321999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.322026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.322612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.323263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.323315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.323860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.324343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.324372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.324862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.325340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.325369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.325872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.326375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.326402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.326803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.327363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.327391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.895 qpair failed and we were unable to recover it. 00:33:17.895 [2024-07-26 13:44:15.327871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.895 [2024-07-26 13:44:15.328347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.328376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 
00:33:17.896 [2024-07-26 13:44:15.328879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.329386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.329414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.329924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.330327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.330354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.330861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.331255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.331295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.331791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.332299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.332327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.332829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.333308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.333336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.333830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.334310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.334337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.334737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.335241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.335269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 
00:33:17.896 [2024-07-26 13:44:15.335779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.336252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.336281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.336778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.337258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.337287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.337780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.338256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.338283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.338795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.339276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.339304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.339810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.340298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.340325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.340721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.341241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.341271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.341702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.342091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.342117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 
00:33:17.896 [2024-07-26 13:44:15.342416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.342918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.342944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.343428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.343943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.343970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.344488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.344996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.345023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.345608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.346198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.346272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.346777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.347437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.347525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.348004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.348603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.348690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 00:33:17.896 [2024-07-26 13:44:15.349396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.349985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.896 [2024-07-26 13:44:15.350022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.896 qpair failed and we were unable to recover it. 
00:33:17.897 [2024-07-26 13:44:15.350550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.351043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.351071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.351568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.352042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.352069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.352564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.353118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.353145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.353725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.354080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.354107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.354647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.355137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.355163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.355671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.356188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.356224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.356593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.357101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.357128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 
00:33:17.897 [2024-07-26 13:44:15.357716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.358442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.358530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.359139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.359621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.359651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.360158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.360676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.897 [2024-07-26 13:44:15.360707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:17.897 qpair failed and we were unable to recover it. 00:33:17.897 [2024-07-26 13:44:15.361218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.361720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.361749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.362236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.362662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.362689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.363171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.363651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.363680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.364192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.364618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.364659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 
00:33:18.165 [2024-07-26 13:44:15.365150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.365718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.365808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.366409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.366925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.366955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.367562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.368072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.368109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.368646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.369098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.369126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.369513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.369893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.369921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.370322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.370714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.370741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.371247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.371608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.371637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 
00:33:18.165 [2024-07-26 13:44:15.372146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.372631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.372660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.373176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.373689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.373717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.374221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.374714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.374741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.375226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.375723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.375750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.376314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.376689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.376717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.377223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.377725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.377752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 00:33:18.165 [2024-07-26 13:44:15.378258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.378744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.165 [2024-07-26 13:44:15.378771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.165 qpair failed and we were unable to recover it. 
00:33:18.165 [2024-07-26 13:44:15.379272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.379647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.379674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.380159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.380650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.380680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.380948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.381426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.381454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.381964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.382441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.382469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.382967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.383423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.383451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.383854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.384402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.384431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.384941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.385459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.385548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 
00:33:18.166 [2024-07-26 13:44:15.386154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.386721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.386763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.387408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.387999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.388036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.388559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.389055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.389083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.389684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.390053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.390081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.390606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.391084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.391111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.391657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.392138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.392166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.392689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.393216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.393244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 
00:33:18.166 [2024-07-26 13:44:15.393747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.394257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.394298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.394822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.395444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.395534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.396138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.396638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.396670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.397176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.397575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.397613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.398110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.398613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.398642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.399142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.399490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.399579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.400177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.400711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.400740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 
00:33:18.166 [2024-07-26 13:44:15.401259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.401742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.401769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.402262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.402774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.402801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.403294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.403681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.403708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.404289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.404795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.404822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.405317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.405702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.405729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.166 qpair failed and we were unable to recover it. 00:33:18.166 [2024-07-26 13:44:15.406233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.406661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.166 [2024-07-26 13:44:15.406688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.407197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.407716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.407754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 
00:33:18.167 [2024-07-26 13:44:15.408265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.408756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.408782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.409273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.409671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.409703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.410220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.410742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.410769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.411304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.411795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.411822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.412316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.412821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.412848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.413338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.413864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.413891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.414386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.414785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.414812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 
00:33:18.167 [2024-07-26 13:44:15.415328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.415804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.415831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.416343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.416835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.416862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.417371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.417829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.417862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.418277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.418800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.418829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.419339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.419823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.419850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.420370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.420869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.420896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.421297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.421809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.421836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 
00:33:18.167 [2024-07-26 13:44:15.422347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.422844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.422874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.423283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.423774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.423803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.424296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.424686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.424719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.425227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.425739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.425767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.426295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.426786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.426813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.427308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.427765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.427792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.428327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.428911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.428939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 
00:33:18.167 [2024-07-26 13:44:15.429453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.429942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.429969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.430464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.430983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.431010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.431612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.432231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.432270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.167 qpair failed and we were unable to recover it. 00:33:18.167 [2024-07-26 13:44:15.432698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.167 [2024-07-26 13:44:15.433224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.433255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.433761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.434265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.434309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.434821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.435166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.435193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.435719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.436194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.436231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 
00:33:18.168 [2024-07-26 13:44:15.436708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.437226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.437255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.437793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.438188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.438234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.438767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.439246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.439274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.439776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.440437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.440528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.441128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.441738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.441829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.442519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.443119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.443156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.443706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.444191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.444230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 
00:33:18.168 [2024-07-26 13:44:15.444786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.445275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.445321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.445756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.446395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.446486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.447096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.447598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.447628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.448140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.448662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.448691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.449236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.449746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.449773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.450392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.450986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.451024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.451464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.452009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.452037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 
00:33:18.168 [2024-07-26 13:44:15.452638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.453121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.453160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.453711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.454267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.454314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.454805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.455328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.455357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.455874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.456322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.456350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.456749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.457253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.457281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.457784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.458265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.458294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.458688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.459217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.459245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 
00:33:18.168 [2024-07-26 13:44:15.459671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.460155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.168 [2024-07-26 13:44:15.460183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.168 qpair failed and we were unable to recover it. 00:33:18.168 [2024-07-26 13:44:15.460611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.461112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.461139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.461647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.462018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.462045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.462551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.462938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.462969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.463541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.464015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.464042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.464542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.465047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.465073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.465598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.466079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.466106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 
00:33:18.169 [2024-07-26 13:44:15.466617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.467139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.467167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.467694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.468176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.468209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.468710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.469191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.469227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.469743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.470241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.470270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.470703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.471221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.471251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.471693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.472193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.472230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.472736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.473223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.473250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 
00:33:18.169 [2024-07-26 13:44:15.473656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.474062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.474090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.474610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.475142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.475170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.475724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.476113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.476140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.476523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.477033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.477060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.477578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.478253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.478294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.478841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.479480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.479571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.480108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.480717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.480812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 
00:33:18.169 [2024-07-26 13:44:15.481502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.482159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.482197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.482734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.483117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.483144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.483647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.484017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.484044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.484585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.485069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.485096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.485611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.486008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.486035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.486638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.487269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.487325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.169 qpair failed and we were unable to recover it. 00:33:18.169 [2024-07-26 13:44:15.487926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.169 [2024-07-26 13:44:15.488413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.488443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 
00:33:18.170 [2024-07-26 13:44:15.488987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.489573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.489665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.490111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.490626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.490656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.491199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.491721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.491749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.492135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.492744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.492836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.493502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.494119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.494156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.494710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.495216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.495246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.495798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.496462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.496555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 
00:33:18.170 [2024-07-26 13:44:15.497168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.497833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.497927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.498619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.499238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.499278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.499822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.500461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.500553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.501162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.501670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.501703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.502118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.502682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.502776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.503487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.504108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.504148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.504817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.505209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.505240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 
00:33:18.170 [2024-07-26 13:44:15.505804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.506450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.506543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.507065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.507588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.507619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.508036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.508525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.508617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.509193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.509718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.509748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.170 [2024-07-26 13:44:15.510451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.514238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.170 [2024-07-26 13:44:15.514305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.170 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.514901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.515440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.515474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.515961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.516485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.516514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 
00:33:18.171 [2024-07-26 13:44:15.516914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.517507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.517535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.518030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.518639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.518737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.519475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.520102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.520141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.520741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.521446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.521539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.522145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.522678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.522708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.523269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.523807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.523835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.524447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.525097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.525135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 
00:33:18.171 [2024-07-26 13:44:15.525671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.526162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.526190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.526701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.527194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.527234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.527780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.528458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.528552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.529164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.529751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.529782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.530456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.531098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.531136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.531780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.532479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.532573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.533168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.533628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.533670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 
00:33:18.171 [2024-07-26 13:44:15.534077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.534475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.534504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.535003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.535605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.535701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.536221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.536647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.536688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.537294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.537731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.537760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.538167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.538711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.538743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.539152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.539768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.539861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.540586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.541221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.541260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 
00:33:18.171 [2024-07-26 13:44:15.541777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.542279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.171 [2024-07-26 13:44:15.542328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.171 qpair failed and we were unable to recover it. 00:33:18.171 [2024-07-26 13:44:15.542849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.543495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.543593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.544083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.544528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.544558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.545065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.545491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.545519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.546022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.546513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.546543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.547071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.547574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.547602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.548011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.548614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.548708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 
00:33:18.172 [2024-07-26 13:44:15.549412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.550018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.550055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.550487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.550876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.550905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.551132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.551545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.551573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.552122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.552531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.552559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.553083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.553670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.553699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.554233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.554686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.554718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.555267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.555783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.555811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 
00:33:18.172 [2024-07-26 13:44:15.556052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.556607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.556635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.557170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.557747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.557777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.558453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.558950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.558986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.559538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.560031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.560059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.560502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.561089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.561117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.561519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.562028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.562055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.562620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.563146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.563173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 
00:33:18.172 [2024-07-26 13:44:15.563695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.564187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.564235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.564763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.565277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.565324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.565871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.566447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.566543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.567129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.567631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.172 [2024-07-26 13:44:15.567662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.172 qpair failed and we were unable to recover it. 00:33:18.172 [2024-07-26 13:44:15.568222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.568725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.568753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.569463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.570116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.570154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.570709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.571119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.571146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 
00:33:18.173 [2024-07-26 13:44:15.571705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.572272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.572335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.572894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.573413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.573443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.573944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.574585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.574681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.575411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.575942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.575990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.576518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.577032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.577061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.577582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.578080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.578109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.578520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.579007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.579034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 
00:33:18.173 [2024-07-26 13:44:15.579670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.580488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.580583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.581187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.581741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.581771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.582471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.582968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.583006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.583588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.584109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.584146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.584712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.585262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.585311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.585864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.586355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.586384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.586922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.587579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.587689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 
00:33:18.173 [2024-07-26 13:44:15.588260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.588785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.588814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.589335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.589852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.589879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.590300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.590841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.590869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.591391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.591820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.591848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.592424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.592807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.592834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.593357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.593880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.593908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.594449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.594800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.594828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 
00:33:18.173 [2024-07-26 13:44:15.595322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.595815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.595843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.596353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.596786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.596831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.173 qpair failed and we were unable to recover it. 00:33:18.173 [2024-07-26 13:44:15.597331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.597844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.173 [2024-07-26 13:44:15.597882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.598396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.598702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.598738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.599283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.599692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.599729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.600283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.600864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.600892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.601400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.601823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.601850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 
00:33:18.174 [2024-07-26 13:44:15.602274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.602700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.602727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.603238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.603757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.603784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.604191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.604839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.604867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.605402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.605785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.605817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.606353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.606865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.606893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.607409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.607799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.607832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.608372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.608886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.608913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 
00:33:18.174 [2024-07-26 13:44:15.609352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.609901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.609928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.610430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.610922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.610950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.611522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.612007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.612036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.612728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.613232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.613272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.613667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.614189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.614231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.614733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.615133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.615166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.615732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.616224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.616255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 
00:33:18.174 [2024-07-26 13:44:15.616769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.617299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.617327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.617905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.618561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.618657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.619272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.619828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.619859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.620382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.620760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.620786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.621301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.621797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.621824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.174 qpair failed and we were unable to recover it. 00:33:18.174 [2024-07-26 13:44:15.622351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.174 [2024-07-26 13:44:15.622847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.622878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.623429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.623945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.623973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 
00:33:18.175 [2024-07-26 13:44:15.624548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.625044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.625071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.625592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.626093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.626121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.626609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.627113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.627141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.627665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.628151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.628179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.628721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.631071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.631141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.175 [2024-07-26 13:44:15.631686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.632221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.175 [2024-07-26 13:44:15.632250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.175 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.632787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.633137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.633164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 
00:33:18.442 [2024-07-26 13:44:15.633718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.634222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.634251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.634783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.635276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.635306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.635848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.636479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.636580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.637192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.637777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.637806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.638479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.639110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.639147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.639747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.640284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.640336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.640874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.641496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.641595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 
00:33:18.442 [2024-07-26 13:44:15.642170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.642755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.642785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.643325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.643884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.643913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.644518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.645137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.645176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.645720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.646236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.646267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.646824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.647465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.647568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.648191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.648811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.648840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 00:33:18.442 [2024-07-26 13:44:15.649487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.650116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.442 [2024-07-26 13:44:15.650154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff418000b90 with addr=10.0.0.2, port=4420 00:33:18.442 qpair failed and we were unable to recover it. 
00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Write completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Write completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Write completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Write completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Write completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.442 Read completed with error (sct=0, sc=8) 00:33:18.442 starting I/O failed 00:33:18.443 Read completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Read completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Read completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Read completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Read completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 Write completed with error (sct=0, sc=8) 00:33:18.443 starting I/O failed 00:33:18.443 [2024-07-26 13:44:15.650536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:18.443 [2024-07-26 13:44:15.651097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.651600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.651662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 
00:33:18.443 [2024-07-26 13:44:15.652076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.652548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.652610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.653121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.653784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.653846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.654451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.655009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.655024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.655535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.656089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.656109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.656791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.657445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.657506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.657905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.658489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.658550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.659075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.659587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.659599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 
00:33:18.443 [2024-07-26 13:44:15.660074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.660598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.660659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.661192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.661687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.661698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.662145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.662713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.662775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.663424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.664019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.664035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.664650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.665213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.665230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.665772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.666426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.666487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.667003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.667590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.667651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 
00:33:18.443 [2024-07-26 13:44:15.668173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.668730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.668793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.669409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.669967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.669982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.670615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.671175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.671193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.671722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.672213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.672224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.672729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.673120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.673131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.673712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.674220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.674236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.674739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.675225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.675237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 
00:33:18.443 [2024-07-26 13:44:15.675846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.676506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.676569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.443 qpair failed and we were unable to recover it. 00:33:18.443 [2024-07-26 13:44:15.677091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.443 [2024-07-26 13:44:15.677683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.677745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.678423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.678985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.679000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.679640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.680217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.680234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.680742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.681139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.681149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.681786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.682438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.682500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.683027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.683518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.683579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 
00:33:18.444 [2024-07-26 13:44:15.684083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.684600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.684611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.685099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.685611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.685673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.686186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.686674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.686687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.687180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.687783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.687846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.688413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.688970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.688986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.689569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.690095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.690111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.690617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.691099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.691110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 
00:33:18.444 [2024-07-26 13:44:15.691604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.692092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.692102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.692589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.692965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.692994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.693512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.694002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.694014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.694573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.695131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.695152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.695485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.696000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.696013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.696595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.697157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.697171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.697699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.698185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.698197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 
00:33:18.444 [2024-07-26 13:44:15.698803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.699473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.699537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.700126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.700705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.700767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.701148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.701754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.701816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.702471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.703026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.703041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.703654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.704423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.704484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.704975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.705559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.705621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.444 [2024-07-26 13:44:15.706143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.706633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.706644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 
00:33:18.444 [2024-07-26 13:44:15.707127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.707708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.444 [2024-07-26 13:44:15.707770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.444 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.708399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.708917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.708932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.709554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.710111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.710127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.710671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.711157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.711167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.711845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.712497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.712559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.713073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.713651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.713713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.714426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.714953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.714969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 
00:33:18.445 [2024-07-26 13:44:15.715626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.716224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.716240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.716789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.717399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.717460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.717981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.718568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.718631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.719170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.719749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.719812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.720498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.721078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.721092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.721664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.722142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.722153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.722790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.723503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.723565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 
00:33:18.445 [2024-07-26 13:44:15.724120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.724614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.724627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.725110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.725691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.725753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.726435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.726897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.726912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.727359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.727759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.727769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.728248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.728757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.728767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.729241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.729721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.729731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.730210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.730707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.730717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 
00:33:18.445 [2024-07-26 13:44:15.731192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.731731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.731742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.732424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.732996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.733010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.733647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.734216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.734232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.734728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.735029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.735040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.735626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.736101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.736117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.736770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.737213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.737228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 00:33:18.445 [2024-07-26 13:44:15.737748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.738134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.738146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.445 qpair failed and we were unable to recover it. 
00:33:18.445 [2024-07-26 13:44:15.738720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.445 [2024-07-26 13:44:15.739390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.739451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.739977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.740492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.740555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.741114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.741817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.741878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.742501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.743067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.743083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.743628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.743929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.743938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.744533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.745101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.745118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.745525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.746050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.746061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 
00:33:18.446 [2024-07-26 13:44:15.746651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.747241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.747284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.747714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.748114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.748124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.748503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.748867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.748877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.749375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.749858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.749868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.750247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.750734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.750744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.751113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.751588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.751605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.752081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.752639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.752650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 
00:33:18.446 [2024-07-26 13:44:15.753028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.753644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.753706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.754244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.754638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.754649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.755077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.755495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.755505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.756006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.756591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.756653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.757179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.757785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.757847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.758494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.758958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.758976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.759588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.760155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.760170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 
00:33:18.446 [2024-07-26 13:44:15.760680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.761164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.761175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.761792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.762446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.762508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.763047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.763622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.763684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.764199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.764811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.764872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.765556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.766120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.766135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.766763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.767427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.767500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.446 qpair failed and we were unable to recover it. 00:33:18.446 [2024-07-26 13:44:15.768074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.768557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.446 [2024-07-26 13:44:15.768619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 
00:33:18.447 [2024-07-26 13:44:15.769152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.769737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.769800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.770410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.770966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.770981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.771587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.772136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.772151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.772570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.773061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.773074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.773647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.774401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.774462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.774995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.775583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.775645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.776157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.776673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.776735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 
00:33:18.447 [2024-07-26 13:44:15.777425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.777952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.777967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.778547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.779111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.779125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.779628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.780186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.780198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.780672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.781055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.781065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.781634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.782164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.782179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.782826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.783490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.783552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.784063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.784645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.784706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 
00:33:18.447 [2024-07-26 13:44:15.785228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.785745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.785757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.786434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.786819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.786848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.787454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.788011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.788025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.788635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.789210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.789227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.789731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.790226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.790237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.790623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.791003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.791013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.791442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.792008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.792023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 
00:33:18.447 [2024-07-26 13:44:15.792642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.793058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.447 [2024-07-26 13:44:15.793073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.447 qpair failed and we were unable to recover it. 00:33:18.447 [2024-07-26 13:44:15.793539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.794111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.794128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.794412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.794920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.794932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.795446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.795930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.795941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.796525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.797095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.797109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.797599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.798124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.798135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.798638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.799118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.799129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 
00:33:18.448 [2024-07-26 13:44:15.799596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.800138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.800155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.800683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.801148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.801159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.801730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.802395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.802457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.802993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.803582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.803644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.804174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.804803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.804864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.805489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.806046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.806060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.806672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.807245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.807287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 
00:33:18.448 [2024-07-26 13:44:15.807810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.808404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.808473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.808995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.809574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.809635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.810096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.810616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.810629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.811057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.811644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.811705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.812150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.812800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.812858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.813425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.813837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.813852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.814469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.815015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.815030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 
00:33:18.448 [2024-07-26 13:44:15.815641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.816086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.816100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.816613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.817092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.817102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.817628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.818105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.818115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.818623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.819141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.819157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.819719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.820369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.820425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.820978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.821556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.821614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.448 [2024-07-26 13:44:15.821922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.822421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.822433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 
00:33:18.448 [2024-07-26 13:44:15.822903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.823146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.448 [2024-07-26 13:44:15.823163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.448 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.823687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.824166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.824177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.824799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.825445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.825503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.826009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.826589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.826646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.827157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.827800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.827857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.828405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.828959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.828973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.829596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.830143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.830157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 
00:33:18.449 [2024-07-26 13:44:15.830842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.831438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.831495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.832078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.832569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.832580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.833068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.833642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.833700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.834218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.834765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.834822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.835430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.835893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.835908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.836529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.837090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.837104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.837606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.838090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.838100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 
00:33:18.449 [2024-07-26 13:44:15.838663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.839132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.839141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.839717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.840395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.840452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.840961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.841518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.841574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.842120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.842603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.842614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.843091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.843588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.843598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.844068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.844648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.844706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 00:33:18.449 [2024-07-26 13:44:15.845360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.845780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.449 [2024-07-26 13:44:15.845797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.449 qpair failed and we were unable to recover it. 
00:33:18.449 [2024-07-26 13:44:15.846325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.449 [2024-07-26 13:44:15.846810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.449 [2024-07-26 13:44:15.846820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:18.449 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 13:44:15.847 through 13:44:16.008; Jenkins timestamps advance from 00:33:18.449 to 00:33:18.721 ...]
00:33:18.721 [2024-07-26 13:44:16.008078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.721 [2024-07-26 13:44:16.008715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.721 [2024-07-26 13:44:16.008768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:18.721 qpair failed and we were unable to recover it.
00:33:18.721 [2024-07-26 13:44:16.009372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.721 [2024-07-26 13:44:16.009919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.721 [2024-07-26 13:44:16.009932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.010535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.011111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.011125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.011586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.012109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.012119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.012607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.013082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.013091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.013497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.013847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.013857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.014472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.015018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.015033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.015630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.016043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.016058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 
00:33:18.722 [2024-07-26 13:44:16.016640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.017093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.017107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.017591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.018061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.018071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.018685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.019180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.019194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.019792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.020474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.020527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.021110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.021663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.021717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.022227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.022719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.022729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.023192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.023691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.023745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 
00:33:18.722 [2024-07-26 13:44:16.024420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.024961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.024975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.025456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.025988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.026003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.026590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.027128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.027142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.027705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.028415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.028469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.028866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.029359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.029370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.029856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.030393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.030403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.030875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.031469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.031523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 
00:33:18.722 [2024-07-26 13:44:16.032071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.032700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.032753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.033425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.033966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.033980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.034577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.035123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.035137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.035611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.035974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.035984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.036475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.036904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.036923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.037511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.038051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.038065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 00:33:18.722 [2024-07-26 13:44:16.038531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.039008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.039017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.722 qpair failed and we were unable to recover it. 
00:33:18.722 [2024-07-26 13:44:16.039561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.722 [2024-07-26 13:44:16.040104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.040118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.040635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.041110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.041120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.041607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.042080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.042090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.042595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.043068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.043079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.043468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.043939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.043949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.044435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.044842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.044855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.045362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.045856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.045866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 
00:33:18.723 [2024-07-26 13:44:16.046329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.046789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.046806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.047305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.047690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.047700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.048167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.048643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.048653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.049119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.049579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.049589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.050087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.050577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.050586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.051051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.051618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.051672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.052177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.052767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.052820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 
00:33:18.723 [2024-07-26 13:44:16.053436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.053947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.053962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.054589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.055133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.055147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.055800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.056459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.056513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.056959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.057568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.057622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.058182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.058661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.058672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.059144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.059719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.059772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.060451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.060994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.061008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 
00:33:18.723 [2024-07-26 13:44:16.061604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.062146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.062161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.062650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.063190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.723 [2024-07-26 13:44:16.063219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.723 qpair failed and we were unable to recover it. 00:33:18.723 [2024-07-26 13:44:16.063730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.064212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.064223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.064884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.065517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.065571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.066077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.066594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.066648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.067155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.067723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.067777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.068416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.068769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.068795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 
00:33:18.724 [2024-07-26 13:44:16.069308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.069788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.069798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.070059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.070555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.070566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.071041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.071613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.071667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.072176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.072814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.072868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.073168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.073741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.073754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.074220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.074709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.074719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.075189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.075662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.075672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 
00:33:18.724 [2024-07-26 13:44:16.076063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.076623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.076677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.077184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.077752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.077805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.078435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.078978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.078992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.079582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.080105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.080118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.080754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.081401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.081454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.081954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.082526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.082579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.083087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.083584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.083595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 
00:33:18.724 [2024-07-26 13:44:16.084058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.084625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.084678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.085196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.085648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.085701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.086196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.086782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.086835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.087447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.088025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.088039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.088642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.089179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.089194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.089812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.090449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.090502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.091004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.091579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.091630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 
00:33:18.724 [2024-07-26 13:44:16.092158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.092772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.092828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.724 qpair failed and we were unable to recover it. 00:33:18.724 [2024-07-26 13:44:16.093456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.094030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.724 [2024-07-26 13:44:16.094046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.094640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.095227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.095244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.095739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.096427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.096482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.096973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.097563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.097618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.098144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.098755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.098812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.099195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.099779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.099834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 
00:33:18.725 [2024-07-26 13:44:16.100431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.100963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.100979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.101569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.102143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.102160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.102734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.103425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.103487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.103858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.104312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.104327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.104788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.105261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.105274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.105649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.106165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.106178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.106544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.107039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.107051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 
00:33:18.725 [2024-07-26 13:44:16.107631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.108216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.108234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.108739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.109415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.109471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.110006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.110579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.110635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.111172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.111786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.111843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.112457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.113025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.113042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.113623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.114199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.114225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.114705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.115240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.115272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 
00:33:18.725 [2024-07-26 13:44:16.115774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.116414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.116470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.116998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.117569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.117625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.118142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.118645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.118659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.119144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.119759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.119816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.120415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.120990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.121006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.121608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.122186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.122210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.122703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.123223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.123237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 
00:33:18.725 [2024-07-26 13:44:16.123722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.124189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.725 [2024-07-26 13:44:16.124207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.725 qpair failed and we were unable to recover it. 00:33:18.725 [2024-07-26 13:44:16.124481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.125007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.125020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.125605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.126177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.126193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.126723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.127348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.127404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.127697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.128218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.128233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.128711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.129183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.129196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.129718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.130365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.130421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 
00:33:18.726 [2024-07-26 13:44:16.130983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.131597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.131651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.132185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.132674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.132727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.133410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.133976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.133992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.134582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.135108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.135124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.135631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.136147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.136159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.136673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.137415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.137467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.137992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.138563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.138616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 
00:33:18.726 [2024-07-26 13:44:16.139161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.139800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.139852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.140451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.140972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.140988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.141579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.142104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.142120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.142623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.143137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.143149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.143733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.144418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.144471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.144980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.145592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.145645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.146153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.146605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.146658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 
00:33:18.726 [2024-07-26 13:44:16.147148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.147621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.147635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.148115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.148728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.148780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.149400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.149972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.149988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.150575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.151147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.151164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.151666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.152070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.152083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.152583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.153099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.153111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.726 [2024-07-26 13:44:16.153594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.154118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.154130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 
00:33:18.726 [2024-07-26 13:44:16.154709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.155392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-07-26 13:44:16.155442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.726 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.155961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.156529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.156580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.157082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.157599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.157611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.158107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.158590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.158603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.158959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.159427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.159483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.159974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.160586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.160636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.161125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.161598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.161610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 
00:33:18.727 [2024-07-26 13:44:16.161976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.162456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.162507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.162905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.163490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.163540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.164035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.164629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.164680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.165181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.165786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.165837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.166473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.166992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.167008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.167596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.168115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.168131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.168712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.169389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.169439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 
00:33:18.727 [2024-07-26 13:44:16.169931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.170430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.170481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.170986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.171596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.171647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.172135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.172635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.172648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.173118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.173656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.173707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.174199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.174692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.174705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.175211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.175814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.175863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.176450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.176972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.176988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 
00:33:18.727 [2024-07-26 13:44:16.177569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.178129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.178145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.178761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.179417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.179468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.179964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.180410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.180462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.180957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.181524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.181574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.182067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.182638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.182689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.183224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.183759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.183772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 00:33:18.727 [2024-07-26 13:44:16.184415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.184982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.184998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.727 qpair failed and we were unable to recover it. 
00:33:18.727 [2024-07-26 13:44:16.185590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-07-26 13:44:16.186121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.728 [2024-07-26 13:44:16.186137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.728 qpair failed and we were unable to recover it. 00:33:18.993 [2024-07-26 13:44:16.186791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.187455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.187507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.993 qpair failed and we were unable to recover it. 00:33:18.993 [2024-07-26 13:44:16.188078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.188525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.188539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.993 qpair failed and we were unable to recover it. 00:33:18.993 [2024-07-26 13:44:16.188897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.189414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.189464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.993 qpair failed and we were unable to recover it. 00:33:18.993 [2024-07-26 13:44:16.189976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.190586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.993 [2024-07-26 13:44:16.190635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.993 qpair failed and we were unable to recover it. 00:33:18.993 [2024-07-26 13:44:16.191157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.191645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.191659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.192162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.192654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.192705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 
00:33:18.994 [2024-07-26 13:44:16.193214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.193685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.193698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.194178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.194721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.194772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.195423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.195983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.195999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.196583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.197148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.197164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.197657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.198207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.198224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.198724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.199349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.199401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.199879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.200488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.200538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 
00:33:18.994 [2024-07-26 13:44:16.201074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.201683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.201733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.202418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.202949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.202965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.203554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.204090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.204106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.204603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.205126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.205138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.205737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.206414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.206465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.206967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.207562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.207614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.208106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.208582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.208595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 
00:33:18.994 [2024-07-26 13:44:16.209082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.209553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.209566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.210039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.210576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.210626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.211129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.211515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.211565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.212063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.212631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.212681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.213210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.213768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.213818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.214429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.214976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.214992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.215586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.216105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.216127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 
00:33:18.994 [2024-07-26 13:44:16.216619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.217133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.217146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.217731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.218425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.218476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.218858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.219429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.219479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.219978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.220559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.220609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.994 qpair failed and we were unable to recover it. 00:33:18.994 [2024-07-26 13:44:16.221104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.994 [2024-07-26 13:44:16.221559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.221573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.221933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.222500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.222551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.223081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.223542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.223556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 
00:33:18.995 [2024-07-26 13:44:16.224036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.224615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.224666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.225054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.225556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.225606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.226099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.226631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.226650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.227137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.227655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.227706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.228238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.228780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.228793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.229272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.229782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.229795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.230171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.230657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.230671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 
00:33:18.995 [2024-07-26 13:44:16.231014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.231594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.231644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.232141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.232668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.232683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.233173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.233785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.233837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.234439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.235055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.235073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.235574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.236049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.236063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.236539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.237103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.237120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.237693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.238416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.238467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 
00:33:18.995 [2024-07-26 13:44:16.238963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.239580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.239631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.240136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.240658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.240671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.241185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.241715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.241767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.242395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.242828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.242844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.243456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.243873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.243889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.244400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.244880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.244893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.245485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.246004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.246020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 
00:33:18.995 [2024-07-26 13:44:16.246496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.247018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.247034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.247628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.248151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.248167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.248604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.249119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.249132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.995 [2024-07-26 13:44:16.249701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.250127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.995 [2024-07-26 13:44:16.250144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.995 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.250618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.251134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.251147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.251575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.252073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.252086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.252504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.253064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.253084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 
00:33:18.996 [2024-07-26 13:44:16.253559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.253842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.253856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.254225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.254672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.254685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.255165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.255665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.255677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.256182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.256672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.256684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.257185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.257786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.257834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.258430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.258840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.258855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.259181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.259686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.259699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 
00:33:18.996 [2024-07-26 13:44:16.260177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.260626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.260674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.261177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.261766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.261814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.262434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.263003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.263018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.263616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.264175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.264191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.264786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.265427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.265475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.265819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.266444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.266492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.266980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.267558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.267607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 
00:33:18.996 [2024-07-26 13:44:16.268098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.268549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.268562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.268932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.269523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.269572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.270106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.270558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.270571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.271050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.271656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.271704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.272198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.272753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.272802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.273405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.273962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.273977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.274567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.275095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.275112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 
00:33:18.996 [2024-07-26 13:44:16.275606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.276125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.276137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.276577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.277114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.277131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.277653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.278127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.278139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.996 qpair failed and we were unable to recover it. 00:33:18.996 [2024-07-26 13:44:16.278743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.996 [2024-07-26 13:44:16.279399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.279447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.279933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.280420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.280473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.280969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.281573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.281622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.282110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.282589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.282602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 
00:33:18.997 [2024-07-26 13:44:16.283108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.283576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.283589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.283941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.284515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.284564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.285057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.285613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.285661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.286153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.286728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.286777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.287413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.287945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.287961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.288547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.289109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.289124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.289623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.290134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.290147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 
00:33:18.997 [2024-07-26 13:44:16.290704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.291217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.291233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.291731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.292159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.292172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.292777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.293389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.293438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.293941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.294542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.294588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.295073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.295668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.295714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.296213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.296756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.296802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.297423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.297935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.297950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 
00:33:18.997 [2024-07-26 13:44:16.298527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.299081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.299097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.299600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.300071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.300083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.300570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.301079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.301092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.301591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.302057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.302071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.302612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.302999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.303016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.303620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.304172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.304187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.304678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.305190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.305206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 
00:33:18.997 [2024-07-26 13:44:16.305772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.306177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.306192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.306776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.307377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.997 [2024-07-26 13:44:16.307424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.997 qpair failed and we were unable to recover it. 00:33:18.997 [2024-07-26 13:44:16.307953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.308458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.308505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.309002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.309605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.309651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.310130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.310697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.310743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.311241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.311739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.311752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.312286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.312563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.312584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 
00:33:18.998 [2024-07-26 13:44:16.313072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.313427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.313441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.313919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.314180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.314205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.314669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.315143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.315155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.315729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.316412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.316459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.316955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.317564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.317611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.318103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.318585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.318598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.318966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.319559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.319606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 
00:33:18.998 [2024-07-26 13:44:16.320090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.320565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.320577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.321071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.321659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.321705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.322193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.322752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.322798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.323389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.323937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.323954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.324537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.325045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.325059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.325661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.326175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.326189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.326663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.327169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.327185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 
00:33:18.998 [2024-07-26 13:44:16.327774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.328410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.328454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.328940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.329209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.329221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.329777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.330416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.998 [2024-07-26 13:44:16.330461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.998 qpair failed and we were unable to recover it. 00:33:18.998 [2024-07-26 13:44:16.330950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.331550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.331595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.332082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.332621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.332666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.333150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.333613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.333625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.334127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.334475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.334525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 
00:33:18.999 [2024-07-26 13:44:16.335021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.335597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.335641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.336125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.336638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.336650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.337112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.337603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.337648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.338146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.338611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.338623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.339100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.339574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.339586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.339919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.340500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.340544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.340922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.341402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.341447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 
00:33:18.999 [2024-07-26 13:44:16.341951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.342521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.342566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.343040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.343632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.343676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.344142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.344569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.344613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.345099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.345408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.345420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.345899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.346404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.346415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.346889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.347484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.347529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.348012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.348612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.348656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 
00:33:18.999 [2024-07-26 13:44:16.349139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.349604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.349616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.350122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.350718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.350762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.351409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.351913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.351928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.352503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.353059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.353073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:18.999 [2024-07-26 13:44:16.353567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.354079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.999 [2024-07-26 13:44:16.354091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:18.999 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.354580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.355083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.355094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.355592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.356103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.356116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 
00:33:19.000 [2024-07-26 13:44:16.356604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.357109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.357121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.357595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.358098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.358110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.358678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.359386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.359430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.359919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.360518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.360562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.360940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.361412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.361424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.361902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.362456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.362500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.363002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.363605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.363649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 
00:33:19.000 [2024-07-26 13:44:16.364136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.364601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.364613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.365090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.365566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.365578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.366036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.366599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.366644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.367132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.367701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.367746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.368408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.368959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.368974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.369545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.370093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.370108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.370622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.371128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.371139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 
00:33:19.000 [2024-07-26 13:44:16.371619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.372168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.372183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.372740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.373394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.373439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.373923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.374517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.374562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.375047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.375644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.375690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.376148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.376667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.376712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.377212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.377709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.377721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.378205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.378760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.378804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 
00:33:19.000 [2024-07-26 13:44:16.379378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.379881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.379896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.380492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.381041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.381056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.381621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.382168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.382183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.000 qpair failed and we were unable to recover it. 00:33:19.000 [2024-07-26 13:44:16.382763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.000 [2024-07-26 13:44:16.383387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.383431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.383915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.384522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.384567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.385068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.385661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.385706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.386187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.386669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.386714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 
00:33:19.001 [2024-07-26 13:44:16.387195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.387779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.387824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.388395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.388950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.388970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.389542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.390090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.390106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.390531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.390998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.391009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.391596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.392106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.392120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.392613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.393078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.393089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.393500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.394049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.394067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 
00:33:19.001 [2024-07-26 13:44:16.394538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.395049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.395060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.395580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.396128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.396143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.396633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.397142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.397153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.397707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.398418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.398463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.398938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.399441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.399453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.399933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.400153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.400164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.400651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.401154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.401164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 
00:33:19.001 [2024-07-26 13:44:16.401722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.402123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.402137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.402689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.403352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.403396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.403880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.404480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.404524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.404864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.405363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.405375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.405868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.406242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.406254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.001 qpair failed and we were unable to recover it. 00:33:19.001 [2024-07-26 13:44:16.406726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.407125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.001 [2024-07-26 13:44:16.407136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.407610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.408114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.408126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 
00:33:19.002 [2024-07-26 13:44:16.408506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.409007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.409019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.409488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.410005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.410020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.410577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.411080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.411094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.411605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.412115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.412127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.412607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.413111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.413121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.413636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.414187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.414209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.414676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.415180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.415191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 
00:33:19.002 [2024-07-26 13:44:16.415646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.416078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.416093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.416594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.417100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.417111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.417582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.418006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.418017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.418585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.419109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.419124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.419706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.420414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.420458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.420943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.421534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.421578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.421763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.422272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.422284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 
00:33:19.002 [2024-07-26 13:44:16.422761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.423140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.423150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.423643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.424150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.424161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.424656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.425161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.425172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.425758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.426393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.426437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.426924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.427413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.427458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.427941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.428506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.428550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.428915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.429394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.429406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 
00:33:19.002 [2024-07-26 13:44:16.429910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.430368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.430412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.430902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.431410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.431422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.002 qpair failed and we were unable to recover it. 00:33:19.002 [2024-07-26 13:44:16.431894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.002 [2024-07-26 13:44:16.432492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.432536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.433027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.433585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.433630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.434131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.434600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.434612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.435086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.435532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.435543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.436018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.436575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.436619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 
00:33:19.003 [2024-07-26 13:44:16.437107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.437578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.437590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.438086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.438611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.438624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.439094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.439658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.439704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.440190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.440669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.440685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.441161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.441714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.441759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.442409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.442938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.442953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.443512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.444062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.444078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 
00:33:19.003 [2024-07-26 13:44:16.444640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.445107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.445119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.445594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.446104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.446117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.446683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.447410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.447455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.447938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.448533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.448578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.449092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.449570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.449582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.450058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.450650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.450694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.451189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.451778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.451827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 
00:33:19.003 [2024-07-26 13:44:16.452435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.452948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.452964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.453548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.454095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.454110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.454617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.455123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.455134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.003 qpair failed and we were unable to recover it. 00:33:19.003 [2024-07-26 13:44:16.455689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.456421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.003 [2024-07-26 13:44:16.456466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 00:33:19.004 [2024-07-26 13:44:16.456948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.457545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.457590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 00:33:19.004 [2024-07-26 13:44:16.458076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.458503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.458548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 00:33:19.004 [2024-07-26 13:44:16.458940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.459408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.459452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 
00:33:19.004 [2024-07-26 13:44:16.459949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.460534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.460578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 00:33:19.004 [2024-07-26 13:44:16.461064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.461644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.004 [2024-07-26 13:44:16.461688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.004 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.462039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.462616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.462660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.463150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.463718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.463763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.464147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.464702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.464747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.465410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.465943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.465958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.466534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.467083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.467098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 
00:33:19.271 [2024-07-26 13:44:16.467603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.468113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.468124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.468504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.469015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.469026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.469580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.470133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.470147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.470708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.471412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.271 [2024-07-26 13:44:16.471456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.271 qpair failed and we were unable to recover it. 00:33:19.271 [2024-07-26 13:44:16.471937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.472431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.472476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.472855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.473365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.473377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.473883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.474494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.474538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 
00:33:19.272 [2024-07-26 13:44:16.475024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.475633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.475678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.476057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.476631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.476676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.477177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.477754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.477798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.478394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.478911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.478926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.479500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.479900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.479914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.480498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.481049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.481065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.481655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.482161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.482176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 
00:33:19.272 [2024-07-26 13:44:16.482576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.483084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.483099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.483593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.483796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.483807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.484303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.484822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.484834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.485186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.485691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.485703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.486042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.486643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.486687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.487038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.487615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.487659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.487917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.488463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.488508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 
00:33:19.272 [2024-07-26 13:44:16.489006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.489611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.489656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.490146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.490571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.490583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.491062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.491657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.491702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.492018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.492627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.492672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.493170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.493812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.493856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.494446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.494896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.494912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.272 qpair failed and we were unable to recover it. 00:33:19.272 [2024-07-26 13:44:16.495530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.272 [2024-07-26 13:44:16.496035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.496050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 
00:33:19.273 [2024-07-26 13:44:16.496632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.497181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.497195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.497765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.498410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.498455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.498941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.499539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.499584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.500068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.500638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.500683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.501412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.501959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.501974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.502564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.503117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.503131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.503617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.504084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.504096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 
00:33:19.273 [2024-07-26 13:44:16.504680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.505072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.505088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.505587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.506094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.506111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.506579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.507092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.507104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.507487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.507996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.508008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.508573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.509080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.509095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.509591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.510099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.510111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.510588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.510937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.510949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 
00:33:19.273 [2024-07-26 13:44:16.511516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.512067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.512082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.512343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.512800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.512813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.513338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.513836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.513847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.514347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.514699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.514711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.514964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.515451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.515463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.515937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.516401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.516412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.516891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.517401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.517412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 
00:33:19.273 [2024-07-26 13:44:16.517904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.518500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.518545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.519025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.519581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.519626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.273 qpair failed and we were unable to recover it. 00:33:19.273 [2024-07-26 13:44:16.520109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.273 [2024-07-26 13:44:16.520582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.520594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.521072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.521629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.521673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.522028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.522601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.522645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.523120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.523640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.523686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.524170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.524736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.524780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 
00:33:19.274 [2024-07-26 13:44:16.525422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.525970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.525985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.526572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.527124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.527140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.527613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.528077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.528089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.528555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.529066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.529078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.529568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.530073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.530084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.530552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.531063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.531075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.531713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.532400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.532445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 
00:33:19.274 [2024-07-26 13:44:16.532931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.533508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.533552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.534084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.534582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.534595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.535091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.535577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.535589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.536054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.536587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.536632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.537119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.537675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.537720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.538100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.538584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.538597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.539107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.539571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.539583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 
00:33:19.274 [2024-07-26 13:44:16.540080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.540471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.540482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.540977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.541538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.541583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.542069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.542642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.542686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.543180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.543795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.543839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.544423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.544821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.544835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.545444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.545991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.546005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 00:33:19.274 [2024-07-26 13:44:16.546575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.547091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.547106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.274 qpair failed and we were unable to recover it. 
00:33:19.274 [2024-07-26 13:44:16.547368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.274 [2024-07-26 13:44:16.547886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.547897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.548368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.548790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.548801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.549292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.549761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.549771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.550247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.550733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.550745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.551204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.551700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.551711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.552207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.552464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.552485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.552974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.553569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.553613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 
00:33:19.275 [2024-07-26 13:44:16.554099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.554356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.554376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.554893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.555397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.555409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.555747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.556219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.556231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.556681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.557187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.557207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.557673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.558134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.558145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.558617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.559121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.559133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.559597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.560102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.560114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 
00:33:19.275 [2024-07-26 13:44:16.560665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.561167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.561183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.561685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.562149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.562161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.562724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.563375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.563419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.563954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.564505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.564550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.565035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.565621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.565665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.566147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.566711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.566755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.567411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.567914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.567929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 
00:33:19.275 [2024-07-26 13:44:16.568397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.568955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.568970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.569565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.570071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.570086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.570585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.571091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.571104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.571467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.571971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.571984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.572571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.573119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.275 [2024-07-26 13:44:16.573134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.275 qpair failed and we were unable to recover it. 00:33:19.275 [2024-07-26 13:44:16.573617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.574131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.574143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.574704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.575405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.575450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 
00:33:19.276 [2024-07-26 13:44:16.575936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.576216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.576237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.576713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.577419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.577464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.577947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.578178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.578197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.578731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.579413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.579457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.579937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.580532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.580577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.581070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.581643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.581687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.582171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.582765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.582809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 
00:33:19.276 [2024-07-26 13:44:16.583388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.583893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.583908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.584481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.584999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.585013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.585578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.586127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.586142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.586628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.587092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.587104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.587663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.588167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.588182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.588654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.589116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.589127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.589654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.590209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.590224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 
00:33:19.276 [2024-07-26 13:44:16.590568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.591026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.591039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.591605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.592154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.592168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.592725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.593398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.593442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.593913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.594494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.594539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.595027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.595547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.276 [2024-07-26 13:44:16.595592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.276 qpair failed and we were unable to recover it. 00:33:19.276 [2024-07-26 13:44:16.596085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.596570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.596581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.597056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.597636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.597682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 
00:33:19.277 [2024-07-26 13:44:16.598181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.598775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.598820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.599420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.599943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.599957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.600526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.601078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.601093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.601589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.602101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.602113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.602576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.603078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.603090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.603478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.603983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.603994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.604558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.605061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.605076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 
00:33:19.277 [2024-07-26 13:44:16.605565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.606070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.606082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.606556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.607064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.607076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.607639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.608147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.608163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.608739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.609392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.609436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.609932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.610491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.610536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.610983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.611586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.611635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.611989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.612451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.612495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 
00:33:19.277 [2024-07-26 13:44:16.612979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.613523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.613568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.614054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.614638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.614682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.615187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.615788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.615832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.616406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.616954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.616968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.617526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.618074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.618089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.618563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.619025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.619037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.619623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.620132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.620146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 
00:33:19.277 [2024-07-26 13:44:16.620529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.621033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.621049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.621626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.622171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.622190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.622461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.622975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.622988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.277 qpair failed and we were unable to recover it. 00:33:19.277 [2024-07-26 13:44:16.623575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.277 [2024-07-26 13:44:16.624125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.624139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.624690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.625115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.625130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.625503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.625968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.625979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.626614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.627159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.627174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 
00:33:19.278 [2024-07-26 13:44:16.627732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.628370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.628415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.628898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.629500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.629544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.630031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.630627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.630672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.631153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.631716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.631761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.632388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.632935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.632950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.633563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.634072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.634087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.634593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.635100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.635112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 
00:33:19.278 [2024-07-26 13:44:16.635609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.636117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.636128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.636589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.637096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.637108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.637656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.638213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.638229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.638726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.639195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.639210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.639758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.640364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.640408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.640667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.641101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.641114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.641544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.642056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.642068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 
00:33:19.278 [2024-07-26 13:44:16.642627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.643175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.643190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.643725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.644209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.644221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.644792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.645412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.645457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.645940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.646539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.646584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.647068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.647628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.647673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.648171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.648768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.648813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.649389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.649941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.649956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 
00:33:19.278 [2024-07-26 13:44:16.650430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.650981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.650996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.278 qpair failed and we were unable to recover it. 00:33:19.278 [2024-07-26 13:44:16.651453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.278 [2024-07-26 13:44:16.651968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.651983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.652550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.653101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.653116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.653611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.654125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.654137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.654699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.655416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.655460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.655947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.656544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.656588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.657073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.657641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.657685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 
00:33:19.279 [2024-07-26 13:44:16.658186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.658778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.658822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.659454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.660003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.660018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.660593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.661102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.661117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.661706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.662121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.662136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.662612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.663076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.663087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.663578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.664087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.664099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.664589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.665046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.665058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 
00:33:19.279 [2024-07-26 13:44:16.665486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.666003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.666018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.666578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.667127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.667142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.667706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.668230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.668259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.668756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.669106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.669117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.669600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.670109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.670119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.670483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.670994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.671005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.671597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.672084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.672101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 
00:33:19.279 [2024-07-26 13:44:16.672457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.672946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.672957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.673430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.673781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.673794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.674296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.674559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.674571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.674913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.675394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.675410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.675883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.676396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.676408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.676778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.677108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.677119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 00:33:19.279 [2024-07-26 13:44:16.677604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.678107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.678118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.279 qpair failed and we were unable to recover it. 
00:33:19.279 [2024-07-26 13:44:16.678612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.279 [2024-07-26 13:44:16.679116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.679128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.679536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.680049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.680061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.680313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.680663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.680676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.681173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.681650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.681661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.682135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.682603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.682615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.683088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.683347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.683362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.683762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.684266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.684277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 
00:33:19.280 [2024-07-26 13:44:16.684784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.685246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.685258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.685731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.686238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.686250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.686727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.687229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.687240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.687733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.688199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.688216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.688682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.689170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.689181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.689655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.690160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.690171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.690730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.691388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.691431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 
00:33:19.280 [2024-07-26 13:44:16.691905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.692476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.692520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.693019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.693544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.693587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.694068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.694661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.694705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.695195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.695748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.695791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.696387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.696902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.696917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.697465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.697770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.697792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.698312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.698794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.698805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 
00:33:19.280 [2024-07-26 13:44:16.699317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.699791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.699802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.700323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.700815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.700826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.701317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.701821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.701831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.702286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.702748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.702758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.703232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.703747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.703758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.280 qpair failed and we were unable to recover it. 00:33:19.280 [2024-07-26 13:44:16.704263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.280 [2024-07-26 13:44:16.704657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.704667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.705155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.705618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.705629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 
00:33:19.281 [2024-07-26 13:44:16.706105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.706608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.706620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.707142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.707570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.707581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.708054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.708640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.708683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.709174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.709768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.709812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.710390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.710889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.710903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.711471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.712015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.712031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.712612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.713134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.713149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 
00:33:19.281 [2024-07-26 13:44:16.713721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.714388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.714431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.714796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.715301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.715313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.715794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.716299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.716310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.716789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.717289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.717300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.717807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.718307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.718319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.718792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.719290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.719301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.719635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.720141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.720152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 
00:33:19.281 [2024-07-26 13:44:16.720653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.721154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.721164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.721622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.722127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.722139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.722497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.722976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.722988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.723551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.723961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.723975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.724489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.725043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.725058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.725230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.725742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.725759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 00:33:19.281 [2024-07-26 13:44:16.726230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.726719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.281 [2024-07-26 13:44:16.726731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.281 qpair failed and we were unable to recover it. 
00:33:19.281 [2024-07-26 13:44:16.727210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.727534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.727545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.728038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.728607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.728649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.729140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.729839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.729882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.730366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.730910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.730925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.731500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.732049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.732064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.732637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.733142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.733158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.733629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.734179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.734194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 
00:33:19.282 [2024-07-26 13:44:16.734648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.735158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.735170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.735732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.736410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.736453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.736941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.737394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.737436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.282 [2024-07-26 13:44:16.737803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.738267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.282 [2024-07-26 13:44:16.738279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.282 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.738785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.739292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.739304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.739682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.740193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.740207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.740669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.741179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.741189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 
00:33:19.549 [2024-07-26 13:44:16.741667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.742064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.742076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.742670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.743169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.743184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.743670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.744215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.744231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.744726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.745412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.745454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.745951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.746553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.746595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.746978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.747545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.747588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.748071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.748678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.748720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 
00:33:19.549 [2024-07-26 13:44:16.749101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.749664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.749706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.750209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.750788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.750830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.751189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.751785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.751827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.752435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.752941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.752955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.753527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.754068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.754082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.754555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.754954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.754965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 00:33:19.549 [2024-07-26 13:44:16.755526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.756071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.549 [2024-07-26 13:44:16.756087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.549 qpair failed and we were unable to recover it. 
00:33:19.550 [2024-07-26 13:44:16.756474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.756989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.757001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.757615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.758083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.758098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.758597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.758864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.758882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.759318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.759796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.759807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.760279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.760779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.760789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.761265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.761773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.761784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.762279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.762709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.762720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 
00:33:19.550 [2024-07-26 13:44:16.763288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.763779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.763790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.764262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.764749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.764759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.765233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.765709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.765721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.766247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.766675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.766687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.767161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.767608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.767619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.768098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.768583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.768595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.769065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.769541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.769584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 
00:33:19.550 [2024-07-26 13:44:16.770087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.770581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.770593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.771066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.771634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.771676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.772156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.772759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.772802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.773401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.773945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.773959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.774539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.775087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.775102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.775440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.775775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.775786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.776288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.776633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.776644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 
00:33:19.550 [2024-07-26 13:44:16.777114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.777555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.777570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.777906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.778379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.778390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.778865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.779320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.779331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.779808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.780267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.780279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.780653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.781157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.781168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.781505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.781854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.550 [2024-07-26 13:44:16.781865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.550 qpair failed and we were unable to recover it. 00:33:19.550 [2024-07-26 13:44:16.782353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.782704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.782715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 
00:33:19.551 [2024-07-26 13:44:16.783187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.783681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.783692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.784027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.784508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.784521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.784975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.785524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.785566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.786030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.786619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.786660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.787146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.787705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.787746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.788412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.788954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.788968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.789548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.790090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.790104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 
00:33:19.551 [2024-07-26 13:44:16.790595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.790983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.790993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.791545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.792082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.792094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.792608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.793072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.793081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.793572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.794076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.794085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.794575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.795037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.795048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.795726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.796153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.796169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.796752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.797391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.797432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 
00:33:19.551 [2024-07-26 13:44:16.797938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.798499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.798540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.798917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.799522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.799564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.800038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.800604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.800645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.801179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.801752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.801794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.802396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.802913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.802928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.803517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.803904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.803919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.804535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.805032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.805048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 
00:33:19.551 [2024-07-26 13:44:16.805619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.806170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.806185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.806659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.807122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.807133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.807633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.808166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.808181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.808580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.809085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.809097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.809663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.810162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.810177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.551 qpair failed and we were unable to recover it. 00:33:19.551 [2024-07-26 13:44:16.810649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.551 [2024-07-26 13:44:16.810880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.810899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.811471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.812008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.812023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 
00:33:19.552 [2024-07-26 13:44:16.812509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.813052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.813068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.813610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.814115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.814126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.814598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.815143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.815161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.815655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.816161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.816173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.816596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.817056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.817068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.817633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.818159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.818174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.818737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.819412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.819453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 
00:33:19.552 [2024-07-26 13:44:16.819939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.820941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.820968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.821526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.821990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.822001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.822563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.823062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.823077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.823547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.824052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.824063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.824630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.825168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.825183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.825748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.826418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.826458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.826940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.827537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.827578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 
00:33:19.552 [2024-07-26 13:44:16.828071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.828629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.828670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.829146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.829707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.829748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.830409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.830928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.830947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.831520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.832017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.832031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.832543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.833087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.833102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.833598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.834059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.834070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.834621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.834976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.834990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 
00:33:19.552 [2024-07-26 13:44:16.835569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.836115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.836129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.836607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.837147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.837162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.837647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.838153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.838164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.838725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.839360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.839401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.552 [2024-07-26 13:44:16.839889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.840489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.552 [2024-07-26 13:44:16.840530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.552 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.841030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.841435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.841481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.841962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.842560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.842602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 
00:33:19.553 [2024-07-26 13:44:16.842944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.843521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.843563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.843953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.844519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.844560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.845082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.845353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.845366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.845877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.846953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.846976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.847440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.847947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.847958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.848527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.848910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.848927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.849409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.849921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.849933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 
00:33:19.553 [2024-07-26 13:44:16.850495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.850993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.851007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.851575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.852082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.852097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.852585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.853088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.853099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.853592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.854100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.854111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.854571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.855076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.855087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.855579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.856042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.856053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.856525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.857034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.857049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 
00:33:19.553 [2024-07-26 13:44:16.857537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.858045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.858056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.858531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.859032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.859043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.859426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.859929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.859941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.860500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.861044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.861059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.861643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.862119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.862134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.862614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.863116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.863130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.863515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.864024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.864035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 
00:33:19.553 [2024-07-26 13:44:16.864598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.865095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.865110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.865711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.866226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.866238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.866733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.867404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.867444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.867944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.868392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.868404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.553 qpair failed and we were unable to recover it. 00:33:19.553 [2024-07-26 13:44:16.868922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.553 [2024-07-26 13:44:16.869405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.869416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.869890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.870403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.870414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.870903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.871377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.871388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 
00:33:19.554 [2024-07-26 13:44:16.871874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.872459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.872499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.872979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.873559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.873599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.874085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.874568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.874580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.875084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.875581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.875592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.875848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.876323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.876335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.876827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.877322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.877334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.877827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.878286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.878298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 
00:33:19.554 [2024-07-26 13:44:16.878782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.879291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.879302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.879776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.880403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.880423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.880895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.881397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.881410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.881912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.882282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.882294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.882837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.883301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.883312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.883811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.884276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.884287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.884770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.885271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.885283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 
00:33:19.554 [2024-07-26 13:44:16.885789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.886297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.886308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.886776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.887126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.887140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.887622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.888083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.888094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.888580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.889078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.889089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.889558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.890061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.890072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.554 qpair failed and we were unable to recover it. 00:33:19.554 [2024-07-26 13:44:16.890630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.554 [2024-07-26 13:44:16.891125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.891139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.891646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.892102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.892114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 
00:33:19.555 [2024-07-26 13:44:16.892676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.893217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.893239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.893728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.894198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.894219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.894664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.896158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.896182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.896666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.897123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.897134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.897601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.898100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.898111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.898585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.899048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.899059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.899741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.900405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.900445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 
00:33:19.555 [2024-07-26 13:44:16.900925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.901521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.901561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.902037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.902612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.902653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.903141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.903650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.903690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.904185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.904743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.904784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.905429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.905814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.905828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.906426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.906925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.906939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 00:33:19.555 [2024-07-26 13:44:16.907282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.907750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.555 [2024-07-26 13:44:16.907761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.555 qpair failed and we were unable to recover it. 
00:33:19.555 [2024-07-26 13:44:16.908242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.555 [2024-07-26 13:44:16.908712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.555 [2024-07-26 13:44:16.908723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.555 qpair failed and we were unable to recover it.
[... the same four-line failure cycle (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x1812010 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without interruption for target timestamps 13:44:16.909 through 13:44:17.062, log time 00:33:19.555 to 00:33:19.828 ...]
00:33:19.828 [2024-07-26 13:44:17.062612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.828 [2024-07-26 13:44:17.063151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.828 [2024-07-26 13:44:17.063164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.828 qpair failed and we were unable to recover it.
00:33:19.828 [2024-07-26 13:44:17.063722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.064421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.064459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.064927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.065519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.065558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.066065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.066648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.066685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.067185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.067772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.067810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.068385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.068870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.068883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.069420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.069935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.069947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.070518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.071003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.071017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 
00:33:19.828 [2024-07-26 13:44:17.071615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.071970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.071983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.072566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.072922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.072935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.073525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.074008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.074021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.074590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.075075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.075088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.075543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.076000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.076010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.076541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.077032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.077045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.077584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.078123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.078137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 
00:33:19.828 [2024-07-26 13:44:17.078717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.079419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.079456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.828 qpair failed and we were unable to recover it. 00:33:19.828 [2024-07-26 13:44:17.079912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.828 [2024-07-26 13:44:17.080460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.080497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.081001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.081431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.081469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.081973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.082517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.082555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.083067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.083615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.083653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.084137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.084693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.084731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.085231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.085708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.085719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 
00:33:19.829 [2024-07-26 13:44:17.086164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.086622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.086632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.087171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.087626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.087663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.088158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.088649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.088660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.089194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.089731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.089769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.090407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.090796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.090809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.091410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.091910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.091922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.092378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.092834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.092845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 
00:33:19.829 [2024-07-26 13:44:17.093231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.093676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.093686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.094229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.094692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.094702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.095149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.095613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.095622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.096069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.096628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.096666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.097171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.097701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.097739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.098407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.098873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.098887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.099466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.099953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.099967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 
00:33:19.829 [2024-07-26 13:44:17.100418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.100878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.100887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.101425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.101964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.101978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.102545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.102818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.102838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.103268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.103748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.103757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.104120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.104583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.104593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.105037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.105587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.105625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 00:33:19.829 [2024-07-26 13:44:17.106130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.106594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.106605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.829 qpair failed and we were unable to recover it. 
00:33:19.829 [2024-07-26 13:44:17.107054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.107648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.829 [2024-07-26 13:44:17.107686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.108184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.108808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.108845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.109471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.109958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.109972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.110587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.111075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.111088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.111543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.112000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.112013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.112552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.113039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.113051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.113673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.114162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.114175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 
00:33:19.830 [2024-07-26 13:44:17.114752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.115401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.115438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.115916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.116504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.116542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.117044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.117594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.117631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.118179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.118761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.118799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.119448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.119835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.119849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.120415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.120908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.120921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.121499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.121985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.121998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 
00:33:19.830 [2024-07-26 13:44:17.122537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.123027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.123041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.123623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.124116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.124128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.124352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.124842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.124853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.125244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.125699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.125708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.126159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.126490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.126501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.126950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.127415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.127425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.127873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.128379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.128416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 
00:33:19.830 [2024-07-26 13:44:17.128914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.129373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.129383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.129832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.130289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.130299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.130766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.131221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.131232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.131696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.132153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.132162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.132620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.132935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.132945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.133419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.133873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.133883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 00:33:19.830 [2024-07-26 13:44:17.134352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.134831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.830 [2024-07-26 13:44:17.134840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.830 qpair failed and we were unable to recover it. 
00:33:19.830 [2024-07-26 13:44:17.135291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.135666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.135676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.136122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.136585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.136595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.137046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.137590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.137627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.138127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.138591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.138602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.139055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.139616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.139653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.140151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.140708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.140746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.141212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.141748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.141785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 
00:33:19.831 [2024-07-26 13:44:17.142393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.142868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.142881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.143457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.143943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.143956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.144497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.144981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.144994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.145572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.146063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.146076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.146579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.146928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.146939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.147517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.148068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.148081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.148538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.148996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.149005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 
00:33:19.831 [2024-07-26 13:44:17.149484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.149975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.149988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.150481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.150979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.150993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.151559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.152045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.152057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.152608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.153114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.153127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.153686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.154172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.154185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.154659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.154875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.154890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.831 [2024-07-26 13:44:17.155458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.155941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.155954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 
00:33:19.831 [2024-07-26 13:44:17.156527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.157012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.831 [2024-07-26 13:44:17.157025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.831 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.157599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.158084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.158097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.158583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.159041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.159050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.159594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.160082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.160095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.160519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.160889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.160899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.161467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.161862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.161875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.162312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.162774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.162790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 
00:33:19.832 [2024-07-26 13:44:17.163243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.163705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.163715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.163889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.164406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.164417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.164885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.165343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.165353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.165821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.166197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.166211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.166667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.167121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.167130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.167655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.168110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.168119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.168668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.169135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.169144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 
00:33:19.832 [2024-07-26 13:44:17.169491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.169990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.170003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.170556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.171044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.171057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.171612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.172098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.172112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.172677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.173160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.173173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.173733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.174192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.174208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.174744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.175417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.175454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.175954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.176505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.176542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 
00:33:19.832 [2024-07-26 13:44:17.177047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.177595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.177632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.178211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.178746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.178782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.179394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.179879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.179892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.180451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.180927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.180941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.181501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.181957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.181970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.182453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.182936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.182949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 00:33:19.832 [2024-07-26 13:44:17.183502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.183988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.184001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.832 qpair failed and we were unable to recover it. 
00:33:19.832 [2024-07-26 13:44:17.184666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.832 [2024-07-26 13:44:17.185176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.185189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.185580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.186122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.186132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.186609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.187099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.187112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.187587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.188041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.188051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.188601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.189086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.189099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.189597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.190083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.190093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.190566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.190936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.190945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 
00:33:19.833 [2024-07-26 13:44:17.191454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.191943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.191957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.192512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.192999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.193012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.193588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.194081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.194094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.194453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.194912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.194922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.195510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.195978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.195991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.196606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.197089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.197102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.197540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.198003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.198012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 
00:33:19.833 [2024-07-26 13:44:17.198552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.199038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.199050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.199590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.200077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.200090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.200598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.201060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.201070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.201623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.202108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.202121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.202606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.203089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.203103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.203578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.204033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.204044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 00:33:19.833 [2024-07-26 13:44:17.204556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.205034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.833 [2024-07-26 13:44:17.205047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.833 qpair failed and we were unable to recover it. 
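The repeated records above show the NVMe/TCP initiator's connect() attempts to 10.0.0.2:4420 being rejected; errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting on that address/port while the target is being torn down and restarted by the test. The following is an illustrative C sketch, not SPDK's posix_sock_create(), that reproduces the same failure mode under the assumption that no listener is reachable on the address taken from the log:

    /* econnrefused_demo.c - illustrative sketch only (not SPDK code):
     * a TCP connect() to an address/port with no listener fails with
     * errno 111 (ECONNREFUSED), the error posix_sock_create logs above.
     * Address and port are copied from the log and assumed unreachable. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
            close(fd);
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target, this prints errno = 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }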
00:33:19.833 [2024-07-26 13:44:17.205597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.206078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.206091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.833 qpair failed and we were unable to recover it.
00:33:19.833 [2024-07-26 13:44:17.206588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.207090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.207100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.833 qpair failed and we were unable to recover it.
00:33:19.833 [2024-07-26 13:44:17.207593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.208051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.208061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.833 qpair failed and we were unable to recover it.
00:33:19.833 [2024-07-26 13:44:17.208585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.209075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.209089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.833 qpair failed and we were unable to recover it.
00:33:19.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1189524 Killed "${NVMF_APP[@]}" "$@"
00:33:19.833 [2024-07-26 13:44:17.209576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 13:44:17 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:33:19.833 [2024-07-26 13:44:17.210035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 [2024-07-26 13:44:17.210045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.833 qpair failed and we were unable to recover it.
00:33:19.833 13:44:17 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:33:19.833 13:44:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:33:19.833 [2024-07-26 13:44:17.210591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.833 13:44:17 -- common/autotest_common.sh@712 -- # xtrace_disable
00:33:19.833 13:44:17 -- common/autotest_common.sh@10 -- # set +x
00:33:19.833 [2024-07-26 13:44:17.211078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.211092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
00:33:19.834 [2024-07-26 13:44:17.211667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.212069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.212083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.212597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.213063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.213074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.213637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.214120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.214133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.214746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.215414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.215451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.215940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.216440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.216477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.216983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.217646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.217682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 
00:33:19.834 13:44:17 -- nvmf/common.sh@469 -- # nvmfpid=1190411
00:33:19.834 [2024-07-26 13:44:17.218184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 13:44:17 -- nvmf/common.sh@470 -- # waitforlisten 1190411
00:33:19.834 13:44:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:19.834 13:44:17 -- common/autotest_common.sh@819 -- # '[' -z 1190411 ']'
00:33:19.834 [2024-07-26 13:44:17.218767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.218804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
00:33:19.834 13:44:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:19.834 13:44:17 -- common/autotest_common.sh@824 -- # local max_retries=100
00:33:19.834 [2024-07-26 13:44:17.219410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 13:44:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:19.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:19.834 13:44:17 -- common/autotest_common.sh@828 -- # xtrace_disable
00:33:19.834 13:44:17 -- common/autotest_common.sh@10 -- # set +x
00:33:19.834 [2024-07-26 13:44:17.219897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.219910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
00:33:19.834 [2024-07-26 13:44:17.220364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.220841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.220851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
00:33:19.834 [2024-07-26 13:44:17.221406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.221841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.221854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
00:33:19.834 [2024-07-26 13:44:17.222310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.222794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.834 [2024-07-26 13:44:17.222805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.834 qpair failed and we were unable to recover it.
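The shell trace above restarts the nvmf target inside the cvl_0_0_ns_spdk network namespace and then waits for it to come up, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". As a rough illustration of that waiting step, and not SPDK's actual waitforlisten helper (which is a bash function), one can poll connect() on the AF_UNIX path until something accepts; the path and retry count below mirror the rpc_addr=/var/tmp/spdk.sock and max_retries=100 values visible in the trace:

    /* wait_for_unix_listen.c - illustrative sketch only, not SPDK's
     * waitforlisten: poll connect() on an AF_UNIX socket path until a
     * listener appears, or give up after max_retries attempts. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int wait_for_unix_listen(const char *path, int max_retries)
    {
        struct sockaddr_un addr;

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;              /* someone is listening on the socket */
            }
            close(fd);
            usleep(100 * 1000);        /* 100 ms between attempts */
        }
        return -1;                     /* gave up */
    }

    int main(void)
    {
        if (wait_for_unix_listen("/var/tmp/spdk.sock", 100) == 0)
            printf("process is listening on /var/tmp/spdk.sock\n");
        else
            printf("timed out waiting for /var/tmp/spdk.sock\n");
        return 0;
    }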
00:33:19.834 [2024-07-26 13:44:17.222999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.223463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.223474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.223656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.224124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.224134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.224614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.225125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.225135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.225402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.225742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.225752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.226121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.226589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.226600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.226977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.227440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.227450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.227947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.228527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.228565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 
00:33:19.834 [2024-07-26 13:44:17.229071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.229519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.229556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.230039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.230534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.230576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.230985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.231537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.231575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.232107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.232576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.232587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.232973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.233533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.233570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.234084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.234428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.234440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.834 qpair failed and we were unable to recover it. 00:33:19.834 [2024-07-26 13:44:17.234829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.235297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.834 [2024-07-26 13:44:17.235307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 
00:33:19.835 [2024-07-26 13:44:17.235755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.236103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.236113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.236650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.237023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.237033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.237509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.237920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.237933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.238409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.238874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.238883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.239503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.240080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.240093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.240651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.241195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.241209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.241489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.241956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.241965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 
00:33:19.835 [2024-07-26 13:44:17.242525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.243029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.243042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.243288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.243768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.243778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.244369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.244869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.244884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.245268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.245634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.245644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.246021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.246533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.246543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.247031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.247491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.247527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.247890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.248441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.248478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 
00:33:19.835 [2024-07-26 13:44:17.248999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.249582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.249619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.249877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.250184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.250195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.250387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.250879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.250890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.251369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.251741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.251751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.252219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.252731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.252741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.253112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.253636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.253646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.254148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.254602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.254613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 
00:33:19.835 [2024-07-26 13:44:17.254925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.255437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.255447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.255901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.256503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.256540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.256916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.257303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.257314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.257811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.258152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.258163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.258626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.259102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.259111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.259579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.260043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.835 [2024-07-26 13:44:17.260053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.835 qpair failed and we were unable to recover it. 00:33:19.835 [2024-07-26 13:44:17.260614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.260991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.261004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 
00:33:19.836 [2024-07-26 13:44:17.261593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.261988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.262001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.262586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.263080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.263094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.263494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.263982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.263992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.264549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.265063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.265075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.265480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.265964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.265974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.266542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.267075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.267089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.267487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.267975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.267985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 
00:33:19.836 [2024-07-26 13:44:17.268586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.268955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.268968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.269558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.270060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.270073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.270594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.271067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.271077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.271444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.271715] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:33:19.836 [2024-07-26 13:44:17.271759] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:19.836 [2024-07-26 13:44:17.271972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.271985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.272585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.273133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.273148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.273680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.274137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.274152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
00:33:19.836 [2024-07-26 13:44:17.274630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.275026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:19.836 [2024-07-26 13:44:17.275037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420
00:33:19.836 qpair failed and we were unable to recover it.
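The "DPDK EAL parameters" line above shows the freshly started nvmf_tgt handing its core mask (0xF0), file prefix, and memory options to the DPDK environment abstraction layer. Purely as a hypothetical sketch of that bring-up step, and not SPDK's actual startup code, an EAL initialization with a comparable (subset) argument vector could look like this:

    /* eal_init_sketch.c - hypothetical sketch, not SPDK's nvmf_tgt startup:
     * initialize the DPDK EAL with a subset of the parameters from the log
     * (core mask 0xF0, no telemetry, spdk0 file prefix, auto proc type).
     * Build against DPDK, e.g. with: pkg-config --cflags --libs libdpdk */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",
            "-c", "0xF0",
            "--no-telemetry",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() parses the EAL arguments and brings up hugepage
         * memory, lcores, and the rest of the environment abstraction layer. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            printf("EAL initialization failed\n");
            return 1;
        }

        printf("EAL initialized on core mask 0xF0\n");
        rte_eal_cleanup();
        return 0;
    }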
00:33:19.836 [2024-07-26 13:44:17.275611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.275995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.276010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.276629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.277170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.277185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.277610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.277911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.277936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.278433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.278948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.278960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.279214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.279749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.279761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.280235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.280514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.280524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.836 qpair failed and we were unable to recover it. 00:33:19.836 [2024-07-26 13:44:17.280977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.836 [2024-07-26 13:44:17.281439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.281449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 
00:33:19.837 [2024-07-26 13:44:17.281790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.282028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.282039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.282533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.283004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.283015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.283574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.283886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.283901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.284373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.284882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.284893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.285509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.286066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.286081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.286558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.287071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.287083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.287570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.287923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.287935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 
00:33:19.837 [2024-07-26 13:44:17.288169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.288639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.288650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.288992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.289521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.289559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.290049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.290481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.290519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.290982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.291433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.291472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.291950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.292559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.292597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.293086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.293614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.837 [2024-07-26 13:44:17.293626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:19.837 qpair failed and we were unable to recover it. 00:33:19.837 [2024-07-26 13:44:17.294108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.294580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.294593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 
00:33:20.104 [2024-07-26 13:44:17.295071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.295668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.295708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.296229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.296736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.296747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.297421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.297681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.297702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.298188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.298685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.298697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.299211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.299682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.299693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.300161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.300478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.300517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.301016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.301606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.301644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 
00:33:20.104 [2024-07-26 13:44:17.302131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.302578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.302616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.303104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.303574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.303585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.303931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.104 [2024-07-26 13:44:17.304407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.304446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.304926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.305421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.305460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.305949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.306473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.306510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.307088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.307549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.307561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.308075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.308581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.308593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 
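Editor's note on the stray "EAL: No free 2048 kB hugepages reported on node 1" line above: it comes from DPDK's EAL while the target initializes and simply means node 1's 2048 kB hugepage pool had nothing free at that moment. If you want to inspect the per-NUMA-node pools directly, Linux exposes them in sysfs. The sketch below is illustrative only (it is not run by this test); the node number and page size are taken from the log message, the paths are the standard sysfs locations, and everything else is an assumption for demonstration.

/*
 * Minimal sysfs reader for per-node hugepage counters (illustrative only).
 * Mirrors the EAL message above: node 1, 2048 kB pages.
 */
#include <stdio.h>

static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long v = -1;

    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    printf("node1 2048kB total: %ld\n", read_counter(path));

    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    printf("node1 2048kB free:  %ld\n", read_counter(path));

    return 0;
}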
00:33:20.104 [2024-07-26 13:44:17.309070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.309649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.309687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.310164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.310611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.310650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.311138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.311726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.311764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.312127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.312596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.312608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.104 [2024-07-26 13:44:17.313075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.313647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.104 [2024-07-26 13:44:17.313686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.104 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.314068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.314641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.314678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.315154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.315737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.315776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 
00:33:20.105 [2024-07-26 13:44:17.316393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.316776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.316791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.317437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.317986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.318001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.318489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.319043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.319056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.319646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.320145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.320160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.320772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.321171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.321186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.321793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.322408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.322446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.322949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.323530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.323569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 
00:33:20.105 [2024-07-26 13:44:17.324073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.324610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.324649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.325180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.325791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.325830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.326465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.326975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.326989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.327595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.327977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.327991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.328506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.329024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.329042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.329632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.330024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.330037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.330626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.330881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.330901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 
00:33:20.105 [2024-07-26 13:44:17.331389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.331885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.331896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.332419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.332919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.332933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.333546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.334062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.334075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.334567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.335046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.335057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.335435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.335730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.335745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.336219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.336680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.336693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.337170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.337681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.337692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 
00:33:20.105 [2024-07-26 13:44:17.338171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.338449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.338465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.338945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.339182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.339194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.339762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.339993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.340003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.105 qpair failed and we were unable to recover it. 00:33:20.105 [2024-07-26 13:44:17.340555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.341109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.105 [2024-07-26 13:44:17.341123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.341697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.342344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.342382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.342902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.343479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.343518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.344003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.344464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.344503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 
00:33:20.106 [2024-07-26 13:44:17.344783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.345274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.345286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.345767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.346011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.346022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.346525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.346773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.346784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.347295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.347788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.347798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.348339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.348583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.348593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.349124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.349520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.349531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.350009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.350364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.350376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 
00:33:20.106 [2024-07-26 13:44:17.350773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.351192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.351208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.351688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.352038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.352049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.352617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.353104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.353119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.353603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.354118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.354130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.354570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.355121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.355136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.355628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.355979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.355989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.356442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.106 [2024-07-26 13:44:17.356674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.357150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.357165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 
00:33:20.106 [2024-07-26 13:44:17.357640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.358153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.358167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.358780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.359435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.359473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.359969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.360531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.360570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.360904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.361486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.361524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.362003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.362616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.362655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.363007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.363466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.363505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.364034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.364535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.364573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 
00:33:20.106 [2024-07-26 13:44:17.365126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.365574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.365613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.365994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.366568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.366606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.106 [2024-07-26 13:44:17.367094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.367494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.106 [2024-07-26 13:44:17.367506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.106 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.368001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.368569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.368607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.369088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.369584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.369596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.370078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.370665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.370704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.371186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.371756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.371795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 
00:33:20.107 [2024-07-26 13:44:17.372427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.372963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.372977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.373567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.374043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.374057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.374644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.375052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.375068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.375571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.376025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.376036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.376532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.377069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.377084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.377585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.378089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.378100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.378590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.379068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.379080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 
00:33:20.107 [2024-07-26 13:44:17.379556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.380064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.380075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.380640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.381034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.381049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.381625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.382122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.382137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.382697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.383189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.383211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.383667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.384162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.384173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.384731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.385139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.385154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.385734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.385900] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:20.107 [2024-07-26 13:44:17.386016] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.107 [2024-07-26 13:44:17.386027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.107 [2024-07-26 13:44:17.386035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:20.107 [2024-07-26 13:44:17.386174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:20.107 [2024-07-26 13:44:17.386297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:20.107 [2024-07-26 13:44:17.386458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.386494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.386611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:20.107 [2024-07-26 13:44:17.386611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:20.107 [2024-07-26 13:44:17.387003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.387581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.387620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.388047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.388634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.388672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.389159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.389733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.389771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.390414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.390957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.390971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.391461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.391733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.391755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.392135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.392605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.392616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 
00:33:20.107 [2024-07-26 13:44:17.393113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.393465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.107 [2024-07-26 13:44:17.393475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.107 qpair failed and we were unable to recover it. 00:33:20.107 [2024-07-26 13:44:17.393946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.394545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.394583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.395152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.395524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.395536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.396009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.396405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.396443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.396995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.397439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.397477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.397720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.398184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.398195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.398597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.399097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.399108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 
00:33:20.108 [2024-07-26 13:44:17.399232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.399565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.399576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.399980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.400387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.400399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.400826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.401286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.401298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.401689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.402043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.402053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.402290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.402753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.402764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.403257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.403731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.403742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.404044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.404516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.404527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 
00:33:20.108 [2024-07-26 13:44:17.404891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.405206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.405223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.405677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.405999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.406010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.406573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.407074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.407088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.407583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.408098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.408111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.408354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.408723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.408735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.409226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.409639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.409650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.410008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.410366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.410377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 
00:33:20.108 [2024-07-26 13:44:17.410849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.411351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.411362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.411829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.412287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.412298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.412795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.413299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.413310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.413606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.413967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.413977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.414456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.414915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.414927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.415264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.415735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.415746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 00:33:20.108 [2024-07-26 13:44:17.416222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.416547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.108 [2024-07-26 13:44:17.416558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.108 qpair failed and we were unable to recover it. 
00:33:20.108 [2024-07-26 13:44:17.417029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.417333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.417345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.417670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.417927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.417938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.418204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.418686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.418698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.419028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.419612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.419651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.419915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.420404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.420416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.420713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.421210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.421221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.421711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.421974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.421986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 
00:33:20.109 [2024-07-26 13:44:17.422461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.422968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.422979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.423559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.424092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.424106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.424595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.425106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.425118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.425676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.426211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.426226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.426571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.427084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.427094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.427676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.428226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.428242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.428736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.429245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.429256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 
00:33:20.109 [2024-07-26 13:44:17.429620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.430091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.430102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.430577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.431537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.431812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.431956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.432289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.432790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.432800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.433302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.433794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.433805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.434145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.434628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.434639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 
00:33:20.109 [2024-07-26 13:44:17.435123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.435357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.435368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.435859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.436134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.436145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.109 qpair failed and we were unable to recover it. 00:33:20.109 [2024-07-26 13:44:17.436624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.109 [2024-07-26 13:44:17.437130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.437141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.437605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.438105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.438116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.438578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.438921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.438932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.439510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.440054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.440069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.440578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.441096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.441106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 
00:33:20.110 [2024-07-26 13:44:17.441351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.441715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.441727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.442205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.442689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.442700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.443173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.443710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.443722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.444192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.444635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.444675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.445023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.445459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.445497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.445978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.446576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.446615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.447097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.447583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.447595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 
00:33:20.110 [2024-07-26 13:44:17.447937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.448396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.448435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.448933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.449528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.449566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.450070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.450450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.450496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.451058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.451425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.451463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.451941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.452541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.452580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.453083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.453617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.453629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.454093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.454553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.454564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 
00:33:20.110 [2024-07-26 13:44:17.454984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.455557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.455595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.455846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.456369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.456381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.456880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.457389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.457401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.457875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.458483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.458521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.459001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.459580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.459619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.459998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.460553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.460591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.461098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.461447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.461458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 
00:33:20.110 [2024-07-26 13:44:17.461821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.462212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.462224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.110 qpair failed and we were unable to recover it. 00:33:20.110 [2024-07-26 13:44:17.462713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.110 [2024-07-26 13:44:17.462852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.462862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.463343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.463826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.463838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.464334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.464843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.464854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.465327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.465796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.465808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.466283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.466528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.466538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.466774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.467232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.467243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 
00:33:20.111 [2024-07-26 13:44:17.467750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.468213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.468224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.468566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.468836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.468846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.469339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.469844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.469854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.470329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.470839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.470850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.471347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.471810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.471820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.472078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.472578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.472589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.473061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.473658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.473697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 
00:33:20.111 [2024-07-26 13:44:17.474194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.474774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.474814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.475166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.475500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.475539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.476035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.476465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.476504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.477015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.477594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.477632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.478019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.478177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.478188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.478645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.479193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.479214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.479600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.480005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.480016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 
00:33:20.111 [2024-07-26 13:44:17.480592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.481137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.481152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.481728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.482096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.482110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.482364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.482844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.482855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.483196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.483698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.483709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.484414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.484913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.484927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.485532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.486083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.486097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.486465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.486957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.486968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 
00:33:20.111 [2024-07-26 13:44:17.487525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.488020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.111 [2024-07-26 13:44:17.488035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.111 qpair failed and we were unable to recover it. 00:33:20.111 [2024-07-26 13:44:17.488487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.489019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.489034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.489604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.490100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.490114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.490599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.491118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.491131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.491563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.492109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.492124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.492614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.493128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.493140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.493696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.494207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.494219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 
00:33:20.112 [2024-07-26 13:44:17.494843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.495484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.495523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.496050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.496543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.496582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.496731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.497227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.497240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.497630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.498139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.498149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.498630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.498766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.498781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.499130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.499364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.499376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.499635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.499982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.499992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 
00:33:20.112 [2024-07-26 13:44:17.500488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.500991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.501003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.501271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.501752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.501763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.502123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.502566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.502577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.503049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.503647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.503685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.504413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.504909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.504923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.505435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.505815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.505830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.506327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.506562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.506572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 
00:33:20.112 [2024-07-26 13:44:17.507045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.507309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.507331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.507693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.508206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.508217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.508718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.508974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.508989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.509360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.509868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.509879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.510164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.510638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.510649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.511120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.511510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.511520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 00:33:20.112 [2024-07-26 13:44:17.511765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.512277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.512288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.112 qpair failed and we were unable to recover it. 
00:33:20.112 [2024-07-26 13:44:17.512771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.112 [2024-07-26 13:44:17.513232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.513243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.513725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.514240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.514251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.514590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.515097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.515108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.515445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.515950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.515960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.516262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.516738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.516749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.517085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.517463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.517475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.517945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.518172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.518182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 
00:33:20.113 [2024-07-26 13:44:17.518514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.518859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.518871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.519349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.519853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.519864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.520339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.520835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.520848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.521186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.521450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.521461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.521956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.522558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.522598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.523084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.523563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.523576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.524048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.524623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.524661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 
00:33:20.113 [2024-07-26 13:44:17.525136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.525698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.525737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.526412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.526952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.526966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.527533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.528028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.528043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.528472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.528966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.528980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.529580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.530086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.530101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.530580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.531086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.531097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.531572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.532079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.532091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 
00:33:20.113 [2024-07-26 13:44:17.532591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.532835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.532845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.533318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.533799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.533810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.113 [2024-07-26 13:44:17.534306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.534788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.113 [2024-07-26 13:44:17.534798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.113 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.535270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.535542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.535553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.536026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.536370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.536381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.536858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.537360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.537371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.537866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.538329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.538341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 
00:33:20.114 [2024-07-26 13:44:17.538814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.539158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.539168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.539534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.539986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.539996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.540564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.541058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.541071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.541567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.541940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.541951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.542614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.543133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.543147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.543620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.544081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.544092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.544659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.545177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.545189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 
00:33:20.114 [2024-07-26 13:44:17.545771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.546418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.546456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.546948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.547547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.547585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.548067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.548629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.548668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.549147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.549593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.549632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.550167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.550695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.550733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.551114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.551339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.551351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.551565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.552025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.552036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 
00:33:20.114 [2024-07-26 13:44:17.552511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.553005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.553015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.553574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.554120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.554135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.554624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.555130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.555146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.555514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.555794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.555808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.556073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.556600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.556612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.557078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.557641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.557680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.558160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.558516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.558554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 
00:33:20.114 [2024-07-26 13:44:17.559059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.559641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.559680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.560162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.560735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.114 [2024-07-26 13:44:17.560774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.114 qpair failed and we were unable to recover it. 00:33:20.114 [2024-07-26 13:44:17.561158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.561719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.561757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.562421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.562964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.562978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.563485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.563891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.563905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.564129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.564653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.564665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.565035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.565295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.565314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 
00:33:20.115 [2024-07-26 13:44:17.565792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.566309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.566320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.566849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.567315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.567327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.567819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.568285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.568296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.568399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.568734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.568744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.569225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.569729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.569740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.570074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.570543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.115 [2024-07-26 13:44:17.570554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.115 qpair failed and we were unable to recover it. 00:33:20.115 [2024-07-26 13:44:17.571026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.571520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.571558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 
00:33:20.381 [2024-07-26 13:44:17.572052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.572626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.572664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.381 [2024-07-26 13:44:17.573146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.573758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.573796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.381 [2024-07-26 13:44:17.574160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.574693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.574731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.381 [2024-07-26 13:44:17.575222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.575747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.575759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.381 [2024-07-26 13:44:17.576225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.576741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.576751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.381 [2024-07-26 13:44:17.576985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.577424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.381 [2024-07-26 13:44:17.577463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.381 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.577951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.578545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.578583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 
00:33:20.382 [2024-07-26 13:44:17.579063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.579647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.579685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.580182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.580762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.580800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.581389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.581930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.581944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.582576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.582869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.582883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.583400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.583867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.583878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.584469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.584967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.584981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.585569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.585955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.585970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 
00:33:20.382 [2024-07-26 13:44:17.586413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.586905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.586919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.587417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.587932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.587943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.588490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.588853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.588867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.589086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.589583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.589594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.590067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.590661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.590699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.591074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.591667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.591706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.592086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.592572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.592583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 
00:33:20.382 [2024-07-26 13:44:17.593055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.593492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.593530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.594009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.594614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.594652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.595127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.595701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.595739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.596418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.596958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.596971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.597422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.597917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.597931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.598443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.598789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.598803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.599296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.599783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.599794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 
00:33:20.382 [2024-07-26 13:44:17.600296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.600785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.600796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.601340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.601564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.601574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.602053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.602523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.602534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.603025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.603509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.603547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.604035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.604507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.382 [2024-07-26 13:44:17.604549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.382 qpair failed and we were unable to recover it. 00:33:20.382 [2024-07-26 13:44:17.605028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.605606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.605645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.606131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.606709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.606747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 
00:33:20.383 [2024-07-26 13:44:17.607424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.607719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.607739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.608212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.608694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.608705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.609180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.609531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.609569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.610078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.610561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.610573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.611071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.611621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.611659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.612157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.612606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.612644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.613125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.613589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.613601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 
00:33:20.383 [2024-07-26 13:44:17.614066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.614517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.614556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.615040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.615635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.615674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.616166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.616798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.616837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.617435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.617941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.617955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.618194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.618778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.618816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.619386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.619685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.619699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.620152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.620537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.620548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 
00:33:20.383 [2024-07-26 13:44:17.620884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.621388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.621399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.621871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.622141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.622153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.622616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.623075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.623086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.623578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.624082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.624093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.624592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.625094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.625106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.625592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.626050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.626061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.626529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.626776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.626790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 
00:33:20.383 [2024-07-26 13:44:17.627272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.627614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.627626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.628097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.628563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.628575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.629052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.629440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.629451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.629923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.630521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.383 [2024-07-26 13:44:17.630560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.383 qpair failed and we were unable to recover it. 00:33:20.383 [2024-07-26 13:44:17.631057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.631418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.631456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.631955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.632510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.632548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.632967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.633519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.633557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 
00:33:20.384 [2024-07-26 13:44:17.634038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.634606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.634644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.635142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.635610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.635648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.635990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.636570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.636608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.637088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.637577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.637588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.638061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.638631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.638669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.639159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.639759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.639797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.640385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.640883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.640897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 
00:33:20.384 [2024-07-26 13:44:17.641447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.641970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.641984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.642441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.642939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.642953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.643235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.643742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.643753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.644232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.644383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.644394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.644861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.645208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.645219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.645566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.646026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.646036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.646509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.646775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.646786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 
00:33:20.384 [2024-07-26 13:44:17.647079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.647546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.647558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.648029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.648488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.648499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.648969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.649514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.649552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.650046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.650629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.650667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.651124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.651497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.651536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.652005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.652607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.652645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.652881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.653362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.653378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 
00:33:20.384 [2024-07-26 13:44:17.653515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.653715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.653725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.654191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.654667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.654679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.655146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.655651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.384 [2024-07-26 13:44:17.655663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.384 qpair failed and we were unable to recover it. 00:33:20.384 [2024-07-26 13:44:17.655920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.656419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.656431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.656925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.657161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.657171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.657646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.657913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.657924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.658154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.658655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.658665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 
00:33:20.385 [2024-07-26 13:44:17.659143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.659580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.659618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.660110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.660505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.660516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.660987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.661582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.661620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.661873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.662346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.662358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.662836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.663341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.663353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.663845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.664344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.664355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.664829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.665090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.665105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 
00:33:20.385 [2024-07-26 13:44:17.665583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.666056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.666066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.666632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.667175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.667190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.667711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.668212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.668223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.668711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.669223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.669243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.669716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.670178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.670188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.670664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.671170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.671183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.671662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.672175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.672186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 
00:33:20.385 [2024-07-26 13:44:17.672661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.672924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.672943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.673566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.674104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.674118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.674573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.675079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.675090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.675429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.675939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.675950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.676517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.677063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.677078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.677589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.677835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.677847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.678082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.678341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.678352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 
00:33:20.385 [2024-07-26 13:44:17.678797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.679310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.679320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.385 qpair failed and we were unable to recover it. 00:33:20.385 [2024-07-26 13:44:17.679588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.680054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.385 [2024-07-26 13:44:17.680065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.680540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.681043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.681054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.681426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.681935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.681949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.682211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.682430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.682440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.682925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.683406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.683445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.683823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.684340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.684352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 
00:33:20.386 [2024-07-26 13:44:17.684832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.685335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.685346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.685623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.686093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.686103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.686598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.687105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.687115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.687656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.688166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.688176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.688635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.689144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.689154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.689493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.689779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.689792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.690263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.690768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.690779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 
00:33:20.386 [2024-07-26 13:44:17.691156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.691660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.691671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.692148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.692540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.692550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.693049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.693636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.693675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.694054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.694630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.694668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.695148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.695718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.695756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.696412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.696635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.696649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.696917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.697371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.697382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 
00:33:20.386 [2024-07-26 13:44:17.697862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.698227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.698239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.386 qpair failed and we were unable to recover it. 00:33:20.386 [2024-07-26 13:44:17.698591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.698945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.386 [2024-07-26 13:44:17.698961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.699433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.699941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.699952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.700433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.700658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.700668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.700990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.701468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.701479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.701951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.702457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.702468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.702960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.703211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.703222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 
00:33:20.387 [2024-07-26 13:44:17.703697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.704206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.704217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.704780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.705161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.705177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.705766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.706042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.706056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.706458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.706962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.706976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.707563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.708104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.708122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.708679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.709419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.709457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.709688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.710015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.710026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 
00:33:20.387 [2024-07-26 13:44:17.710501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.711011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.711022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.711667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.711974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.711989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.712557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.712834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.712848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.713326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.713789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.713800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.714271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.714776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.714787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.715314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.715789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.715799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.716274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.716776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.716786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 
00:33:20.387 [2024-07-26 13:44:17.717198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.717573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.717584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.718066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.718593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.718632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.719126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.719597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.719609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.719949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.720433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.720472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.720954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.721508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.721546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.722030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.722616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.722654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 00:33:20.387 [2024-07-26 13:44:17.723156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.723800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.387 [2024-07-26 13:44:17.723840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.387 qpair failed and we were unable to recover it. 
00:33:20.387 [2024-07-26 13:44:17.724402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.724780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.724793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.725426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.725935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.725949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.726430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.726941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.726952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.727547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.728090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.728104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.728604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.728854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.728871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.729399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.729887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.729897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.730445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.730738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.730758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 
00:33:20.388 [2024-07-26 13:44:17.731235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.731699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.731710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.732260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.732691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.732701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.733177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.733688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.733699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.734037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.734609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.734648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.735157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.735514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.735526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.736001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.736552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.736590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.736773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.737162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.737174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 
00:33:20.388 [2024-07-26 13:44:17.737655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.738176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.738188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.738678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.739137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.739148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.739481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.740022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.740037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.740672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.741228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.741252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.741746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.741993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.742003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.742391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.742744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.742755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.743092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.743493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.743505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 
00:33:20.388 [2024-07-26 13:44:17.743978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.744329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.744339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.744677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.744809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.744819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.745289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.745674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.745685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.746164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.746655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.746666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.747046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.747518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.747529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.748002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.748467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.748506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.388 qpair failed and we were unable to recover it. 00:33:20.388 [2024-07-26 13:44:17.748886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.749496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.388 [2024-07-26 13:44:17.749534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 
00:33:20.389 [2024-07-26 13:44:17.750018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.750622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.750661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.750894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.751144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.751156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.751631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.752105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.752116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.752598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.752869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.752880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.753353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.753677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.753688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.754005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.754490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.754500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.754995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.755595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.755638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 
00:33:20.389 [2024-07-26 13:44:17.756141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.756658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.756670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.757006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.757603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.757641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.757986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.758461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.758499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.758983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.759584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.759622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.760126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.760687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.760725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.761211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.761692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.761703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 00:33:20.389 [2024-07-26 13:44:17.762181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.762748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.389 [2024-07-26 13:44:17.762787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.389 qpair failed and we were unable to recover it. 
00:33:20.662 [2024-07-26 13:44:17.899852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.900150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.900160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.900649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.901153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.901165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.901673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.901947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.901958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.902432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.902891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.902902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.903469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.903976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.903990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.904579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.904864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.904879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.905385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.905889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.905900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 
00:33:20.662 [2024-07-26 13:44:17.906382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.906858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.906869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.662 qpair failed and we were unable to recover it. 00:33:20.662 [2024-07-26 13:44:17.907339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.662 [2024-07-26 13:44:17.907798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.907813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.908284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.908744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.908755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.909249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.909492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.909503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.909981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.910248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.910260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.910732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.911078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.911089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.911467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.911934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.911945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 
00:33:20.663 [2024-07-26 13:44:17.912438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.912942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.912952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.913418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.913880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.913890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.914463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.914953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.914966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.915534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.915878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.915892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.916105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.916563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.916575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.917052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.917549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.917561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.917794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.918270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.918282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 
00:33:20.663 [2024-07-26 13:44:17.918766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.919266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.919277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.919614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.919880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.919890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.920362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.920621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.920639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.920967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.921316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.921328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.921726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.922235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.922245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.922720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.923220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.923231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.923703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.924160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.924170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 
00:33:20.663 [2024-07-26 13:44:17.924391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.924863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.924874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.925349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.925810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.925821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.926327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.926844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.926854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.927188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.927688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.927699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.928043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.928525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.928564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.663 [2024-07-26 13:44:17.928766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.929276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.663 [2024-07-26 13:44:17.929288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.663 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.929822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.930043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.930054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 
00:33:20.664 [2024-07-26 13:44:17.930527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.931030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.931041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.931606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.932144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.932158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.932718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.932996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.933010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.933606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.933883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.933897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.934287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.934802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.934813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.935045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.935376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.935387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.935849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.936118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.936128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 
00:33:20.664 [2024-07-26 13:44:17.936477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.936745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.936756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.937238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.937483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.937496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.937742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.938237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.938248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.938736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.939198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.939219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.939678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.940150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.940161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.940632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.941138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.941149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.941636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.941983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.941994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 
00:33:20.664 [2024-07-26 13:44:17.942565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.943112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.943126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.943634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.944102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.944112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.944673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.945069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.945084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.945581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.946046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.946056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.946641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.947154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.947169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.947770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.948406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.948444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.948905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.949500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.949538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 
00:33:20.664 [2024-07-26 13:44:17.950019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.950578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.950616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.951096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.951590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.951602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.952095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.952565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.952577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.952812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.953282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.953298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.664 qpair failed and we were unable to recover it. 00:33:20.664 [2024-07-26 13:44:17.953773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.954119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.664 [2024-07-26 13:44:17.954130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.954607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.955107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.955118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.955585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.956087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.956098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 
00:33:20.665 [2024-07-26 13:44:17.956593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.957014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.957026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.957151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.957622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.957633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.957889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.958397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.958408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.958902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.959505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.959543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.960026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.960544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.960581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.961060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.961655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.961693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.962174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.962756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.962795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 
00:33:20.665 [2024-07-26 13:44:17.963388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.963897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.963911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.964478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.964865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.964879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.965481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.965975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.965989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.966564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.967106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.967120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.967599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.968111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.968123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.968611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.968856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.968867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.969337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.969462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.969473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 
00:33:20.665 [2024-07-26 13:44:17.969951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.970410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.970420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.970915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.971374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.971385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.971847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.972306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.972317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.972799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.973301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.973313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.973792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.974299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.974310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.974552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.975059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.975070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.665 qpair failed and we were unable to recover it. 00:33:20.665 [2024-07-26 13:44:17.975573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.976078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.665 [2024-07-26 13:44:17.976089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 
00:33:20.666 [2024-07-26 13:44:17.976578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.976925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.976938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.977477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.978020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.978035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.978601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.979141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.979155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.979286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.979795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.979806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.980281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.980740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.980750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.981278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.981780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.981791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.982286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.982780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.982791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 
00:33:20.666 [2024-07-26 13:44:17.983262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.983767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.983778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.984010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.984243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.984255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.984793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.985208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.985219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.985692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.986199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.986215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.986677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.987182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.987193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.987545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.988087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.988102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.988589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.989097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.989108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 
00:33:20.666 [2024-07-26 13:44:17.989582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.990092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.990103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.990678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.991227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.991252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.991747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.992299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.992311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.992777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.993239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.993250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.993590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.994094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.994105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.994587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.995091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.995102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.995439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.995946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.995957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 
00:33:20.666 [2024-07-26 13:44:17.996428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.996930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.996941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.997528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.998020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.998035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.998605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.999144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:17.999158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:17.999719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.000176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.000188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:18.000737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.001413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.001452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.666 [2024-07-26 13:44:18.001907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.002394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.666 [2024-07-26 13:44:18.002437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.666 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.002927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.003388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.003426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 
00:33:20.667 [2024-07-26 13:44:18.003784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.004136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.004146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.004628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.005104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.005115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.005587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.006045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.006055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.006635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.007138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.007153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.007635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.008172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.008186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.008659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.008983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.008993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.009583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.009861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.009875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 
00:33:20.667 [2024-07-26 13:44:18.010469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.010971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.010985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.011590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.012131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.012149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.012637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.013145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.013155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.013722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.014388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.014427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.014911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.015534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.015572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.016050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.016644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.016682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.017166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.017527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.017565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 
00:33:20.667 [2024-07-26 13:44:18.017947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.018518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.018556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.019083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.019566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.019578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.020053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.020642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.020680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.020830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.021304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.021316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.021676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.022065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.022076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.022341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.022689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.022699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.023191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.023685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.023696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 
00:33:20.667 [2024-07-26 13:44:18.024035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.024386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.024397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.024746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.025238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.025250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.025385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.025715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.025725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.026205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.026662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.026673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.667 [2024-07-26 13:44:18.027009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.027210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.667 [2024-07-26 13:44:18.027221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.667 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.027711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.028149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.028159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.028725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.029422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.029461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 
00:33:20.668 [2024-07-26 13:44:18.029945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.030513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.030552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.031041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.031458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.031496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.031997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.032554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.032593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.033095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.033442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.033454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.033988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 13:44:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:20.668 [2024-07-26 13:44:18.034564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.034603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 13:44:18 -- common/autotest_common.sh@852 -- # return 0 00:33:20.668 13:44:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:20.668 [2024-07-26 13:44:18.035088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 13:44:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:20.668 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.668 [2024-07-26 13:44:18.035575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.035587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.036084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.036534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.036544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 
00:33:20.668 [2024-07-26 13:44:18.036801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.037157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.037166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.037714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.038088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.038097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.038352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.038829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.038838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.039211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.039455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.039468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.039902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.040377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.040389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.040891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.041347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.041357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.041806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.042039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.042048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 
00:33:20.668 [2024-07-26 13:44:18.042528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.042987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.042996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.043421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.043708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.043721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.044212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.044592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.044602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.044841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.045188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.045199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.045430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.045777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.045786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.046240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.046610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.046620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.046993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.047374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.047384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 
00:33:20.668 [2024-07-26 13:44:18.047865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.048326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.048336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.048789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.049251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.049261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.668 [2024-07-26 13:44:18.049710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.050047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.668 [2024-07-26 13:44:18.050056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.668 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.050515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.050766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.050781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.051234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.051674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.051683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.052136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.052509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.052518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.052966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.053425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.053434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 
00:33:20.669 [2024-07-26 13:44:18.053881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.054339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.054349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.054814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.055291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.055310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.055806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.056139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.056149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.056650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.056906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.056915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.057137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.057596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.057607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.057853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.058087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.058098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.058709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.059071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.059081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 
00:33:20.669 [2024-07-26 13:44:18.059530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.059765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.059775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.060257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.060736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.060746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.061198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.061666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.061677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.062135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.062648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.062658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.063126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.063594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.063604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.064053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.064604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.064641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.064995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.065549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.065586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 
00:33:20.669 [2024-07-26 13:44:18.066090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.066576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.066588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.066839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.067290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.067300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.067750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.068211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.068222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.068740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.068995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.069006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.069457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.069918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.069928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.070442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.070905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.070915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.071138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.071607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.071617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 
00:33:20.669 [2024-07-26 13:44:18.072066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.072435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.072472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 [2024-07-26 13:44:18.072977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.073528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.669 [2024-07-26 13:44:18.073566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.669 qpair failed and we were unable to recover it. 00:33:20.669 13:44:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.669 [2024-07-26 13:44:18.074076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 13:44:18 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:20.670 [2024-07-26 13:44:18.074637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.074675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.670 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.670 [2024-07-26 13:44:18.075183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.075736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.075774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.076388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.076878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.076891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.077398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.077905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.077917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.078478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.078732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.078745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 
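Interleaved with the connect retries above, the xtrace lines show target_disconnect.sh arming its cleanup trap and then creating the backing device for the test: rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. a 64 MB RAM-backed bdev with 512-byte blocks (the bare "Malloc0" echoed a little further down is the RPC's return value). Run by hand against an already-started target, the equivalent call would look roughly like this sketch (the rpc.py path and the default RPC socket are assumptions, not taken from this log):

# Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0 (sketch; default RPC socket assumed)
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0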
00:33:20.670 [2024-07-26 13:44:18.079240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.079717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.079728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.080198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.080450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.080466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.080950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.081432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.081442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.081810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.082269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.082279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.082730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.083062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.083077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.083309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.083820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.083830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.084283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.084790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.084800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 
00:33:20.670 [2024-07-26 13:44:18.085254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.085729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.085738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.086186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.086421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.086431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.086639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.087143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.087153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.087618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.088083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.088092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.088336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.088814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.088823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.089297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.089638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.089647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.089908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 Malloc0 00:33:20.670 [2024-07-26 13:44:18.090433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.090443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 
00:33:20.670 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.670 [2024-07-26 13:44:18.090821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 13:44:18 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:20.670 [2024-07-26 13:44:18.091193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.091212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.670 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.670 [2024-07-26 13:44:18.091681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.092052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.092061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.092506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.092674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.670 [2024-07-26 13:44:18.092687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.670 qpair failed and we were unable to recover it. 00:33:20.670 [2024-07-26 13:44:18.093245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.093721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.093730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.093982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.094230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.094240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.094792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.095246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.095256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.095725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.096268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.096278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 
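Here the script creates the NVMe-oF TCP transport inside the target (rpc_cmd nvmf_create_transport -t tcp -o, with -o carried over verbatim from the trace); the *** TCP Transport Init *** notice in the next block is the target acknowledging it. A hedged standalone equivalent, leaving every other tunable at its default:

# Initialize the TCP transport in the running nvmf target (sketch; options beyond the trace omitted)
scripts/rpc.py nvmf_create_transport -t tcp -o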
00:33:20.671 [2024-07-26 13:44:18.096790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.097257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.097267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.097494] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.671 [2024-07-26 13:44:18.097733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.098215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.098225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.098580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.099039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.099049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.099521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.099988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.099998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.100451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.100914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.100923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.101476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.101968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.101982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.102541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.103034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.103047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 
00:33:20.671 [2024-07-26 13:44:18.103598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.103946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.103959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.104535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.105033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.105046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.105647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.106146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.106159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.671 [2024-07-26 13:44:18.106624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 13:44:18 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:20.671 [2024-07-26 13:44:18.107116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.107130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.671 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.671 [2024-07-26 13:44:18.107689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.108145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.108155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.108713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.109216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.109235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.109719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.110181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.110191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 
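Next the subsystem the host will eventually connect to is created: NQN nqn.2016-06.io.spdk:cnode1, with -a allowing any host NQN to connect and -s fixing the serial number. As a standalone sketch under the same assumptions as above:

# Create the subsystem, allow any host (-a), set a fixed serial number (-s)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001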
00:33:20.671 [2024-07-26 13:44:18.110670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.111132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.111141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.111450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.111942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.111955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.112182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.112662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.112673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.113131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.113578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.113615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.113999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.114566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.114603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.115114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.115575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.115586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.115836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.116318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.116328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 
00:33:20.671 [2024-07-26 13:44:18.116782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.117242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.671 [2024-07-26 13:44:18.117252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.671 qpair failed and we were unable to recover it. 00:33:20.671 [2024-07-26 13:44:18.117750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.118218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.118227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.672 [2024-07-26 13:44:18.118708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 13:44:18 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:20.672 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.672 [2024-07-26 13:44:18.119170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.119179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.672 [2024-07-26 13:44:18.119659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.119886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.119896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.120114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.120576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.120586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.121039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.121543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.121553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.122001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.122575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.122613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 
00:33:20.672 [2024-07-26 13:44:18.123076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.123500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.123537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.124040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.124602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.124639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.125154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.125723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.125761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.126414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.126908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.672 [2024-07-26 13:44:18.126921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.672 qpair failed and we were unable to recover it. 00:33:20.672 [2024-07-26 13:44:18.127470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.127962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.127977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.128577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.129084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.129097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.129559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.130037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.130047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 
00:33:20.934 [2024-07-26 13:44:18.130434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.934 [2024-07-26 13:44:18.130796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.130809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 13:44:18 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.934 [2024-07-26 13:44:18.131179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.934 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.934 [2024-07-26 13:44:18.131659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.131669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.132191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.132621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.132658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.132944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.133517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.133554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.134063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.134632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.134669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.135181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.135491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.135528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.934 qpair failed and we were unable to recover it. 00:33:20.934 [2024-07-26 13:44:18.136032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.934 [2024-07-26 13:44:18.136601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.935 [2024-07-26 13:44:18.136643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.935 qpair failed and we were unable to recover it. 
00:33:20.935 [2024-07-26 13:44:18.137153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.935 [2024-07-26 13:44:18.137652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.935 [2024-07-26 13:44:18.137689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812010 with addr=10.0.0.2, port=4420 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.137763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.935 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.935 13:44:18 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:20.935 13:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.935 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:33:20.935 [2024-07-26 13:44:18.148369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.148498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.148518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.148526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.148533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.148554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 13:44:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.935 13:44:18 -- host/target_disconnect.sh@58 -- # wait 1189560 00:33:20.935 [2024-07-26 13:44:18.158303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.158417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.158434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.158442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.158448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.158464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 
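The xtrace lines interleaved above show host/target_disconnect.sh building the target side through rpc_cmd, which in SPDK's autotest is a thin wrapper that forwards its arguments to scripts/rpc.py: create subsystem nqn.2016-06.io.spdk:cnode1 (-a to allow any host, -s for the serial number), attach the Malloc0 bdev as a namespace, and add TCP listeners on 10.0.0.2:4420 for both the subsystem and discovery. A minimal sketch of the same sequence issued by hand is below; the ./scripts/rpc.py path, a running nvmf_tgt on its default RPC socket, a previously created TCP transport, and the pre-existing Malloc0 bdev are assumptions, while the subcommands and flags are taken verbatim from the trace.

  # Sketch only: replay the RPC sequence traced above against a running nvmf_tgt.
  # Assumes an SPDK checkout (./scripts/rpc.py), a TCP transport already created
  # (e.g. ./scripts/rpc.py nvmf_create_transport -t tcp), and an existing bdev named Malloc0.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420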
00:33:20.935 [2024-07-26 13:44:18.168351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.168459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.168476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.168483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.168489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.168505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.178372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.178485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.178507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.178514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.178520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.178535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.188362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.188480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.188498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.188505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.188511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.188527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 
00:33:20.935 [2024-07-26 13:44:18.198355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.198462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.198480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.198487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.198493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.198508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.208382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.208501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.208518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.208525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.208531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.208546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.218441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.218556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.218573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.218581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.218586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.218605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 
00:33:20.935 [2024-07-26 13:44:18.228468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.228588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.228605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.228612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.228618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.228633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.238481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.238592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.238611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.238618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.238625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.238640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 00:33:20.935 [2024-07-26 13:44:18.248520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.248624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.248641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.935 [2024-07-26 13:44:18.248648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.935 [2024-07-26 13:44:18.248654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.935 [2024-07-26 13:44:18.248669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.935 qpair failed and we were unable to recover it. 
00:33:20.935 [2024-07-26 13:44:18.258522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.935 [2024-07-26 13:44:18.258631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.935 [2024-07-26 13:44:18.258647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.258654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.258660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.258675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.268548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.268660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.268684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.268691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.268697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.268713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.278575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.278692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.278710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.278718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.278724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.278739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 
00:33:20.936 [2024-07-26 13:44:18.288646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.288760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.288776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.288783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.288790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.288805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.298652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.298760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.298778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.298786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.298792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.298808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.308728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.308848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.308874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.308882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.308893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.308914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 
00:33:20.936 [2024-07-26 13:44:18.318603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.318730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.318756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.318764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.318771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.318791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.328794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.328922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.328940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.328947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.328953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.328968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.338744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.338869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.338895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.338904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.338911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.338930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 
00:33:20.936 [2024-07-26 13:44:18.348689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.348817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.348835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.348842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.348848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.348864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.358878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.359025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.359052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.359060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.359067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.359087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.368877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.369029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.369047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.369054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.369061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.369076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 
00:33:20.936 [2024-07-26 13:44:18.378875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.379024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.936 [2024-07-26 13:44:18.379042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.936 [2024-07-26 13:44:18.379049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.936 [2024-07-26 13:44:18.379055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.936 [2024-07-26 13:44:18.379071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.936 qpair failed and we were unable to recover it. 00:33:20.936 [2024-07-26 13:44:18.388900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.936 [2024-07-26 13:44:18.389015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.937 [2024-07-26 13:44:18.389032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.937 [2024-07-26 13:44:18.389040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.937 [2024-07-26 13:44:18.389046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.937 [2024-07-26 13:44:18.389061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.937 qpair failed and we were unable to recover it. 00:33:20.937 [2024-07-26 13:44:18.399018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.937 [2024-07-26 13:44:18.399124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.937 [2024-07-26 13:44:18.399142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.937 [2024-07-26 13:44:18.399149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.937 [2024-07-26 13:44:18.399159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:20.937 [2024-07-26 13:44:18.399174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.937 qpair failed and we were unable to recover it. 
00:33:21.199 [2024-07-26 13:44:18.409071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.409238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.409256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.409263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.409269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.409285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.419037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.419179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.419197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.419209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.419215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.419230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.429098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.429240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.429258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.429265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.429271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.429286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 
00:33:21.199 [2024-07-26 13:44:18.439046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.439157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.439174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.439181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.439187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.439209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.449095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.449213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.449231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.449238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.449244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.449260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.459126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.459240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.459257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.459264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.459270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.459285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 
00:33:21.199 [2024-07-26 13:44:18.469129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.469251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.469268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.469275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.469281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.469296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.479133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.479252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.479269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.479276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.479282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.479297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.489164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.489326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.489344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.489351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.489361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.489377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 
00:33:21.199 [2024-07-26 13:44:18.499093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.499205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.499222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.499229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.499235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.499250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.509238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.509352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.509369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.509376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.509382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.509398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 00:33:21.199 [2024-07-26 13:44:18.519305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.519452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.519469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.519477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.199 [2024-07-26 13:44:18.519483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.199 [2024-07-26 13:44:18.519499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.199 qpair failed and we were unable to recover it. 
00:33:21.199 [2024-07-26 13:44:18.529302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.199 [2024-07-26 13:44:18.529419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.199 [2024-07-26 13:44:18.529437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.199 [2024-07-26 13:44:18.529444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.529450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.529469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.539320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.539431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.539449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.539456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.539462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.539478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.549346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.549454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.549471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.549478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.549484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.549499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 
00:33:21.200 [2024-07-26 13:44:18.559364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.559478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.559495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.559502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.559508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.559523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.569440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.569568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.569586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.569593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.569599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.569614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.579449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.579561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.579578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.579584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.579594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.579610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 
00:33:21.200 [2024-07-26 13:44:18.589494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.589747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.589766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.589773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.589779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.589794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.599496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.599604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.599621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.599628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.599634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.599649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.609527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.609640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.609658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.609666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.609675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.609691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 
00:33:21.200 [2024-07-26 13:44:18.619541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.619651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.619669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.619676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.619682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.619697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.629563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.629678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.629695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.629702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.629710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.629725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.639597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.639702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.639719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.639726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.639732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.639747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 
00:33:21.200 [2024-07-26 13:44:18.649622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.649729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.200 [2024-07-26 13:44:18.649745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.200 [2024-07-26 13:44:18.649752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.200 [2024-07-26 13:44:18.649758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.200 [2024-07-26 13:44:18.649773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.200 qpair failed and we were unable to recover it. 00:33:21.200 [2024-07-26 13:44:18.659639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.200 [2024-07-26 13:44:18.659745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.201 [2024-07-26 13:44:18.659762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.201 [2024-07-26 13:44:18.659769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.201 [2024-07-26 13:44:18.659775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.201 [2024-07-26 13:44:18.659790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.201 qpair failed and we were unable to recover it. 00:33:21.201 [2024-07-26 13:44:18.669750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.201 [2024-07-26 13:44:18.669873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.201 [2024-07-26 13:44:18.669899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.201 [2024-07-26 13:44:18.669912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.201 [2024-07-26 13:44:18.669919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.201 [2024-07-26 13:44:18.669939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.201 qpair failed and we were unable to recover it. 
00:33:21.463 [2024-07-26 13:44:18.679645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.679800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.679834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.679842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.679850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.679870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 00:33:21.463 [2024-07-26 13:44:18.689745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.689863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.689881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.689888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.689894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.689911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 00:33:21.463 [2024-07-26 13:44:18.699776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.699892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.699918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.699926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.699933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.699952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 
00:33:21.463 [2024-07-26 13:44:18.709853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.709978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.710003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.710012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.710018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.710038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 00:33:21.463 [2024-07-26 13:44:18.719756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.719877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.719903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.719911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.719918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.719938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 00:33:21.463 [2024-07-26 13:44:18.729880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.729990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.730008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.730015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.730021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.730037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 
00:33:21.463 [2024-07-26 13:44:18.739809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.739928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.739946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.463 [2024-07-26 13:44:18.739953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.463 [2024-07-26 13:44:18.739959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.463 [2024-07-26 13:44:18.739974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.463 qpair failed and we were unable to recover it. 00:33:21.463 [2024-07-26 13:44:18.749881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.463 [2024-07-26 13:44:18.749999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.463 [2024-07-26 13:44:18.750025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.750033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.750039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.750060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.759935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.760040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.760059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.760070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.760077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.760093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 
00:33:21.464 [2024-07-26 13:44:18.769981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.770106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.770124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.770132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.770138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.770154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.780052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.780196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.780220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.780227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.780233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.780249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.790108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.790225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.790243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.790249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.790255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.790271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 
00:33:21.464 [2024-07-26 13:44:18.800069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.800173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.800191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.800198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.800210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.800226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.810122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.810232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.810250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.810257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.810263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.810278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.820170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.820413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.820432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.820439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.820445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.820460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 
00:33:21.464 [2024-07-26 13:44:18.830189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.830317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.830334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.830341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.830347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.830362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.840087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.840198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.840221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.840229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.840235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.840250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.850224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.850336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.850353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.850363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.850369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.850385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 
00:33:21.464 [2024-07-26 13:44:18.860250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.860364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.860381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.860388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.860395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.860409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.870272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.464 [2024-07-26 13:44:18.870390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.464 [2024-07-26 13:44:18.870407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.464 [2024-07-26 13:44:18.870414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.464 [2024-07-26 13:44:18.870420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.464 [2024-07-26 13:44:18.870436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.464 qpair failed and we were unable to recover it. 00:33:21.464 [2024-07-26 13:44:18.880397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.880506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.880523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.880530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.880537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.880552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 
00:33:21.465 [2024-07-26 13:44:18.890381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.890508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.890526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.890532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.890538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.890553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 00:33:21.465 [2024-07-26 13:44:18.900350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.900456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.900474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.900481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.900487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.900502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 00:33:21.465 [2024-07-26 13:44:18.910418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.910569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.910586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.910593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.910599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.910614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 
00:33:21.465 [2024-07-26 13:44:18.920438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.920549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.920566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.920573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.920579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.920594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 00:33:21.465 [2024-07-26 13:44:18.930372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.465 [2024-07-26 13:44:18.930492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.465 [2024-07-26 13:44:18.930509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.465 [2024-07-26 13:44:18.930516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.465 [2024-07-26 13:44:18.930522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.465 [2024-07-26 13:44:18.930537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.465 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:18.940481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.940589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.940607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.940618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.940624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.940638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 
00:33:21.848 [2024-07-26 13:44:18.950483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.950600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.950618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.950625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.950631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.950646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:18.960514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.960626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.960644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.960650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.960656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.960671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:18.970445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.970563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.970580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.970588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.970594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.970609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 
00:33:21.848 [2024-07-26 13:44:18.980612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.980720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.980737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.980744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.980750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.980765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:18.990593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:18.990767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:18.990794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:18.990803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:18.990809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:18.990829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:19.000633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:19.000747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:19.000773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:19.000781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:19.000788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:19.000807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 
00:33:21.848 [2024-07-26 13:44:19.010641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:19.010761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:19.010787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:19.010796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:19.010803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:19.010823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:19.020674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:19.020785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:19.020803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:19.020810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:19.020816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:19.020833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 00:33:21.848 [2024-07-26 13:44:19.030741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:19.030865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:19.030896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.848 [2024-07-26 13:44:19.030905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.848 [2024-07-26 13:44:19.030911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.848 [2024-07-26 13:44:19.030931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.848 qpair failed and we were unable to recover it. 
00:33:21.848 [2024-07-26 13:44:19.040744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.848 [2024-07-26 13:44:19.040856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.848 [2024-07-26 13:44:19.040882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.040891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.040898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.040918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.050753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.050869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.050895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.050903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.050909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.050930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.060686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.060798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.060817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.060824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.060830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.060847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 
00:33:21.849 [2024-07-26 13:44:19.070806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.070928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.070946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.070953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.070959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.070974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.080828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.080960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.080977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.080984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.080990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.081006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.090879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.090985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.091002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.091009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.091015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.091031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 
00:33:21.849 [2024-07-26 13:44:19.100926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.101037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.101055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.101062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.101068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.101084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.110831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.110948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.110966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.110973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.110980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.110996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.120951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.121070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.121094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.121102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.121108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.121123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 
00:33:21.849 [2024-07-26 13:44:19.130975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.131082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.131099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.131106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.131112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.131128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.141079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.141185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.141208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.141216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.141223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.141238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.849 [2024-07-26 13:44:19.150934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.151047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.151063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.151070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.151076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.151091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 
00:33:21.849 [2024-07-26 13:44:19.161045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.849 [2024-07-26 13:44:19.161150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.849 [2024-07-26 13:44:19.161167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.849 [2024-07-26 13:44:19.161175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.849 [2024-07-26 13:44:19.161181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.849 [2024-07-26 13:44:19.161195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.849 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.171163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.171290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.171308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.171315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.171321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.171337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.181198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.181329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.181346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.181353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.181359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.181374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 
00:33:21.850 [2024-07-26 13:44:19.191162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.191283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.191300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.191307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.191313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.191328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.201068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.201310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.201328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.201335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.201341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.201355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.211208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.211315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.211335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.211343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.211349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.211364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 
00:33:21.850 [2024-07-26 13:44:19.221241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.221397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.221420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.221428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.221434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.221449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.231288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.231397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.231414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.231421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.231427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.231443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.241297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.241415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.241433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.241440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.241447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.241461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 
00:33:21.850 [2024-07-26 13:44:19.251311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.251420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.251437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.251445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.251451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.251470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.261372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.261483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.261500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.261507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.261514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.261528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.271373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.271488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.271505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.271512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.271518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.271534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 
00:33:21.850 [2024-07-26 13:44:19.281403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.281512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.281530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.281537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.850 [2024-07-26 13:44:19.281543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.850 [2024-07-26 13:44:19.281558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.850 qpair failed and we were unable to recover it. 00:33:21.850 [2024-07-26 13:44:19.291473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.850 [2024-07-26 13:44:19.291582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.850 [2024-07-26 13:44:19.291600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.850 [2024-07-26 13:44:19.291607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.851 [2024-07-26 13:44:19.291613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.851 [2024-07-26 13:44:19.291628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.851 qpair failed and we were unable to recover it. 00:33:21.851 [2024-07-26 13:44:19.301503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.851 [2024-07-26 13:44:19.301612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.851 [2024-07-26 13:44:19.301634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.851 [2024-07-26 13:44:19.301642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.851 [2024-07-26 13:44:19.301648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:21.851 [2024-07-26 13:44:19.301664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.851 qpair failed and we were unable to recover it. 
00:33:22.135 [2024-07-26 13:44:19.311586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.311702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.311720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.311727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.311734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.311750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.321544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.321655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.321673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.321680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.321686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.321702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.331562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.331675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.331692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.331700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.331707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.331722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 
00:33:22.135 [2024-07-26 13:44:19.341620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.341726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.341743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.341750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.341756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.341775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.351645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.351762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.351788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.351797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.351804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.351825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.361820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.361936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.361962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.361970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.361977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.361998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 
00:33:22.135 [2024-07-26 13:44:19.371720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.371843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.371870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.371878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.371885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.371905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.381641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.381782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.381808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.381817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.381824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.381844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 00:33:22.135 [2024-07-26 13:44:19.391744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.135 [2024-07-26 13:44:19.391866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.135 [2024-07-26 13:44:19.391896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.135 [2024-07-26 13:44:19.391905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.135 [2024-07-26 13:44:19.391913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.135 [2024-07-26 13:44:19.391933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.135 qpair failed and we were unable to recover it. 
00:33:22.135 [2024-07-26 13:44:19.401767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.401885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.401911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.401920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.401928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.401948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.411833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.411950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.411977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.411985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.411992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.412011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.421814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.421923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.421941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.421949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.421955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.421971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 
00:33:22.136 [2024-07-26 13:44:19.431841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.431976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.432004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.432013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.432020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.432045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.441774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.441932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.441953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.441961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.441968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.441984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.451887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.452013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.452039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.452049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.452056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.452076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 
00:33:22.136 [2024-07-26 13:44:19.461970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.462082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.462101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.462108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.462114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.462131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.471968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.472086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.472104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.472112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.472118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.472134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.481977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.482086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.482108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.482115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.482121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.482136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 
00:33:22.136 [2024-07-26 13:44:19.492093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.492238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.492256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.492263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.492269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.492285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.502066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.502179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.502196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.502212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.502218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.502234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.136 [2024-07-26 13:44:19.512009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.512121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.512138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.512146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.512152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.512168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 
00:33:22.136 [2024-07-26 13:44:19.522076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.136 [2024-07-26 13:44:19.522180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.136 [2024-07-26 13:44:19.522198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.136 [2024-07-26 13:44:19.522212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.136 [2024-07-26 13:44:19.522221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.136 [2024-07-26 13:44:19.522237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.136 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.532123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.532239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.532256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.532263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.532269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.532285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.542139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.542257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.542275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.542282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.542288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.542304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 
00:33:22.137 [2024-07-26 13:44:19.552106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.552221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.552239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.552246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.552252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.552268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.562190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.562299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.562316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.562324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.562329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.562345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.572222] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.572340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.572358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.572365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.572372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.572387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 
00:33:22.137 [2024-07-26 13:44:19.582264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.582373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.582390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.582397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.582403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.582419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.592293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.592403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.592419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.592426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.592433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.592448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 00:33:22.137 [2024-07-26 13:44:19.602314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.137 [2024-07-26 13:44:19.602427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.137 [2024-07-26 13:44:19.602445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.137 [2024-07-26 13:44:19.602452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.137 [2024-07-26 13:44:19.602458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.137 [2024-07-26 13:44:19.602474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.137 qpair failed and we were unable to recover it. 
00:33:22.400 [2024-07-26 13:44:19.612359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.612461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.612479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.612486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.612497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.612513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.622386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.622492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.622509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.622516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.622523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.622538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.632451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.632565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.632583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.632590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.632596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.632611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 
00:33:22.400 [2024-07-26 13:44:19.642442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.642552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.642570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.642577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.642583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.642598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.652482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.652586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.652605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.652612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.652618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.652633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.662674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.662791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.662808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.662815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.662821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.662836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 
00:33:22.400 [2024-07-26 13:44:19.672532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.672666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.672693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.672702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.672711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.672732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.682465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.682577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.682596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.682604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.682610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.682627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 00:33:22.400 [2024-07-26 13:44:19.692566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.400 [2024-07-26 13:44:19.692676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.400 [2024-07-26 13:44:19.692694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.400 [2024-07-26 13:44:19.692701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.400 [2024-07-26 13:44:19.692707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.400 [2024-07-26 13:44:19.692723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.400 qpair failed and we were unable to recover it. 
00:33:22.400 [2024-07-26 13:44:19.702663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:22.400 [2024-07-26 13:44:19.702803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:22.400 [2024-07-26 13:44:19.702823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:22.400 [2024-07-26 13:44:19.702830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:22.400 [2024-07-26 13:44:19.702842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010
00:33:22.400 [2024-07-26 13:44:19.702859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:22.400 qpair failed and we were unable to recover it.
[... the same seven-record CONNECT failure sequence for tqpair=0x1812010 (qpair id 3) repeats at roughly 10 ms intervals, with only the timestamps changing, from 13:44:19.712638 through 13:44:20.374525 ...]
00:33:22.929 [2024-07-26 13:44:20.384579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:22.929 [2024-07-26 13:44:20.384697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:22.929 [2024-07-26 13:44:20.384714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:22.929 [2024-07-26 13:44:20.384721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:22.929 [2024-07-26 13:44:20.384727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010
00:33:22.929 [2024-07-26 13:44:20.384742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:22.929 qpair failed and we were unable to recover it.
00:33:22.929 [2024-07-26 13:44:20.394575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.929 [2024-07-26 13:44:20.394691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.929 [2024-07-26 13:44:20.394709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.929 [2024-07-26 13:44:20.394717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.929 [2024-07-26 13:44:20.394723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:22.929 [2024-07-26 13:44:20.394738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.929 qpair failed and we were unable to recover it. 00:33:23.192 [2024-07-26 13:44:20.404543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.192 [2024-07-26 13:44:20.404660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.192 [2024-07-26 13:44:20.404677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.192 [2024-07-26 13:44:20.404685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.192 [2024-07-26 13:44:20.404691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.192 [2024-07-26 13:44:20.404706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.192 qpair failed and we were unable to recover it. 00:33:23.192 [2024-07-26 13:44:20.414765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.192 [2024-07-26 13:44:20.414888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.192 [2024-07-26 13:44:20.414907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.192 [2024-07-26 13:44:20.414915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.192 [2024-07-26 13:44:20.414921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.192 [2024-07-26 13:44:20.414937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.192 qpair failed and we were unable to recover it. 
00:33:23.192 [2024-07-26 13:44:20.424715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.192 [2024-07-26 13:44:20.424841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.192 [2024-07-26 13:44:20.424872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.192 [2024-07-26 13:44:20.424882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.192 [2024-07-26 13:44:20.424889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.192 [2024-07-26 13:44:20.424909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.192 qpair failed and we were unable to recover it. 00:33:23.192 [2024-07-26 13:44:20.434766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.192 [2024-07-26 13:44:20.434884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.192 [2024-07-26 13:44:20.434911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.192 [2024-07-26 13:44:20.434920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.192 [2024-07-26 13:44:20.434927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.192 [2024-07-26 13:44:20.434946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.192 qpair failed and we were unable to recover it. 00:33:23.192 [2024-07-26 13:44:20.444681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.192 [2024-07-26 13:44:20.444797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.192 [2024-07-26 13:44:20.444824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.192 [2024-07-26 13:44:20.444833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.192 [2024-07-26 13:44:20.444839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.192 [2024-07-26 13:44:20.444859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.192 qpair failed and we were unable to recover it. 
00:33:23.192 [2024-07-26 13:44:20.454686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.454794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.454813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.454820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.454827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.454843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.464787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.464900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.464917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.464925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.464931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.464951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.474839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.474958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.474976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.474983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.474990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.475006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 
00:33:23.193 [2024-07-26 13:44:20.484796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.484904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.484922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.484930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.484936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.484951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.494879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.495023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.495041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.495048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.495054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.495069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.504899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.505006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.505024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.505031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.505038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.505053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 
00:33:23.193 [2024-07-26 13:44:20.514957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.515085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.515106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.515113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.515119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.515135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.524905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.525010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.525027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.525035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.525041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.525056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.534939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.535040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.535058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.535065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.535071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.535086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 
00:33:23.193 [2024-07-26 13:44:20.545001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.545113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.545130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.545138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.545144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.545159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.554950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.555067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.555084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.555091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.555097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.555116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 00:33:23.193 [2024-07-26 13:44:20.565095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.565218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.565237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.565245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.565254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.565271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.193 qpair failed and we were unable to recover it. 
00:33:23.193 [2024-07-26 13:44:20.575048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.193 [2024-07-26 13:44:20.575145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.193 [2024-07-26 13:44:20.575164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.193 [2024-07-26 13:44:20.575171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.193 [2024-07-26 13:44:20.575177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.193 [2024-07-26 13:44:20.575193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.585028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.585131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.585148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.585155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.585162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.585177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.595024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.595171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.595188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.595196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.595207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.595223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 
00:33:23.194 [2024-07-26 13:44:20.605128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.605263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.605288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.605295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.605302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.605318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.615164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.615273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.615291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.615298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.615305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.615321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.625192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.625304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.625322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.625330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.625336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.625351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 
00:33:23.194 [2024-07-26 13:44:20.635247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.635358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.635376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.635383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.635389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.635405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.645281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.645388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.645405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.645412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.645419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.645438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 00:33:23.194 [2024-07-26 13:44:20.655292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.194 [2024-07-26 13:44:20.655393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.194 [2024-07-26 13:44:20.655410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.194 [2024-07-26 13:44:20.655418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.194 [2024-07-26 13:44:20.655424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.194 [2024-07-26 13:44:20.655440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.194 qpair failed and we were unable to recover it. 
00:33:23.457 [2024-07-26 13:44:20.665186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.665322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.665340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.665347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.665354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.665370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 00:33:23.457 [2024-07-26 13:44:20.675377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.675492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.675510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.675518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.675524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.675540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 00:33:23.457 [2024-07-26 13:44:20.685334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.685433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.685451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.685458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.685464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.685480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 
00:33:23.457 [2024-07-26 13:44:20.695373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.695509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.695530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.695537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.695543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.695559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 00:33:23.457 [2024-07-26 13:44:20.705292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.705419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.705437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.705443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.705450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.705465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 00:33:23.457 [2024-07-26 13:44:20.715467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.715583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.715601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.715608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.715615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.715630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 
00:33:23.457 [2024-07-26 13:44:20.725450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.457 [2024-07-26 13:44:20.725556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.457 [2024-07-26 13:44:20.725573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.457 [2024-07-26 13:44:20.725581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.457 [2024-07-26 13:44:20.725588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.457 [2024-07-26 13:44:20.725603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.457 qpair failed and we were unable to recover it. 00:33:23.457 [2024-07-26 13:44:20.735517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.735648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.735666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.735673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.735683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.735698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.745513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.745627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.745645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.745651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.745658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.745673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 
00:33:23.458 [2024-07-26 13:44:20.755545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.755645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.755662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.755669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.755676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.755691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.765524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.765627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.765644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.765652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.765658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.765673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.775603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.775738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.775756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.775763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.775769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.775784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 
00:33:23.458 [2024-07-26 13:44:20.785609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.785721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.785747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.785756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.785762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.785782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.795567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.795675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.795695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.795703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.795710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.795727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.805668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.805775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.805793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.805801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.805807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.805823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 
00:33:23.458 [2024-07-26 13:44:20.815718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.815838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.815855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.815863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.815869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.815885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.825747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.825849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.825867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.825875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.825885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.825901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.835792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.835956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.835973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.835980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.835987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.836001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 
00:33:23.458 [2024-07-26 13:44:20.845821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.845932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.845958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.845966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.845973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.845994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.855849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.855954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.855973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.855980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.855987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.856003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.865831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.865931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.865949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.865956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.865963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.865978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 
00:33:23.458 [2024-07-26 13:44:20.875885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.875993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.876011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.458 [2024-07-26 13:44:20.876019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.458 [2024-07-26 13:44:20.876026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.458 [2024-07-26 13:44:20.876042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.458 qpair failed and we were unable to recover it. 00:33:23.458 [2024-07-26 13:44:20.885866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.458 [2024-07-26 13:44:20.885981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.458 [2024-07-26 13:44:20.886008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.459 [2024-07-26 13:44:20.886016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.459 [2024-07-26 13:44:20.886023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.459 [2024-07-26 13:44:20.886043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.459 qpair failed and we were unable to recover it. 00:33:23.459 [2024-07-26 13:44:20.895945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.459 [2024-07-26 13:44:20.896055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.459 [2024-07-26 13:44:20.896082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.459 [2024-07-26 13:44:20.896091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.459 [2024-07-26 13:44:20.896098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.459 [2024-07-26 13:44:20.896118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.459 qpair failed and we were unable to recover it. 
00:33:23.459 [2024-07-26 13:44:20.905969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.459 [2024-07-26 13:44:20.906073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.459 [2024-07-26 13:44:20.906092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.459 [2024-07-26 13:44:20.906100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.459 [2024-07-26 13:44:20.906106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.459 [2024-07-26 13:44:20.906123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.459 qpair failed and we were unable to recover it. 00:33:23.459 [2024-07-26 13:44:20.915981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.459 [2024-07-26 13:44:20.916084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.459 [2024-07-26 13:44:20.916102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.459 [2024-07-26 13:44:20.916109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.459 [2024-07-26 13:44:20.916121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.459 [2024-07-26 13:44:20.916137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.459 qpair failed and we were unable to recover it. 00:33:23.459 [2024-07-26 13:44:20.925968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.459 [2024-07-26 13:44:20.926070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.459 [2024-07-26 13:44:20.926088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.459 [2024-07-26 13:44:20.926095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.459 [2024-07-26 13:44:20.926101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.459 [2024-07-26 13:44:20.926117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.459 qpair failed and we were unable to recover it. 
00:33:23.722 [2024-07-26 13:44:20.935916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.936034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.936051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.936059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.936066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.936081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:20.945938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.946040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.946057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.946064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.946071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.946086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:20.956097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.956209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.956227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.956235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.956241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.956258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 
00:33:23.722 [2024-07-26 13:44:20.966189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.966350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.966367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.966374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.966381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.966396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:20.976155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.976263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.976281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.976289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.976295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.976311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:20.986183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.986339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.986357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.986364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.986371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.986386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 
00:33:23.722 [2024-07-26 13:44:20.996204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:20.996356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:20.996374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:20.996381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:20.996388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:20.996403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:21.006087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:21.006189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:21.006213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:21.006220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:21.006230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:21.006245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 00:33:23.722 [2024-07-26 13:44:21.016303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:21.016411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:21.016429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:21.016436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:21.016442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:21.016458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.722 qpair failed and we were unable to recover it. 
00:33:23.722 [2024-07-26 13:44:21.026271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.722 [2024-07-26 13:44:21.026359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.722 [2024-07-26 13:44:21.026376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.722 [2024-07-26 13:44:21.026383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.722 [2024-07-26 13:44:21.026390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.722 [2024-07-26 13:44:21.026404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.036268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.036378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.036395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.036402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.036409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.036425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.046308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.046410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.046427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.046434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.046441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.046457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 
00:33:23.723 [2024-07-26 13:44:21.056344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.056445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.056462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.056470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.056476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.056492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.066392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.066493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.066511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.066518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.066525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.066539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.076417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.076529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.076547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.076554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.076560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.076576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 
00:33:23.723 [2024-07-26 13:44:21.086440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.086548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.086565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.086572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.086579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.086594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.096480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.096588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.096606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.096617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.096624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.096639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.106495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.106595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.106613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.106620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.106626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.106641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 
00:33:23.723 [2024-07-26 13:44:21.116411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.116514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.116532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.116539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.116545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.116561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.126523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.126625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.126642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.126650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.126656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.126671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.136479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.136585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.136602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.136609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.136615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.136631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 
00:33:23.723 [2024-07-26 13:44:21.146585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.146686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.146703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.146710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.146717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.723 [2024-07-26 13:44:21.146732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.723 qpair failed and we were unable to recover it. 00:33:23.723 [2024-07-26 13:44:21.156626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.723 [2024-07-26 13:44:21.156729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.723 [2024-07-26 13:44:21.156746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.723 [2024-07-26 13:44:21.156753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.723 [2024-07-26 13:44:21.156759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.724 [2024-07-26 13:44:21.156775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.724 qpair failed and we were unable to recover it. 00:33:23.724 [2024-07-26 13:44:21.166808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.724 [2024-07-26 13:44:21.166915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.724 [2024-07-26 13:44:21.166942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.724 [2024-07-26 13:44:21.166950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.724 [2024-07-26 13:44:21.166957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.724 [2024-07-26 13:44:21.166977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.724 qpair failed and we were unable to recover it. 
00:33:23.724 [2024-07-26 13:44:21.176664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.724 [2024-07-26 13:44:21.176772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.724 [2024-07-26 13:44:21.176798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.724 [2024-07-26 13:44:21.176807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.724 [2024-07-26 13:44:21.176814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.724 [2024-07-26 13:44:21.176834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.724 qpair failed and we were unable to recover it. 00:33:23.724 [2024-07-26 13:44:21.186696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.724 [2024-07-26 13:44:21.186805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.724 [2024-07-26 13:44:21.186831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.724 [2024-07-26 13:44:21.186845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.724 [2024-07-26 13:44:21.186852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.724 [2024-07-26 13:44:21.186872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.724 qpair failed and we were unable to recover it. 00:33:23.986 [2024-07-26 13:44:21.196762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.986 [2024-07-26 13:44:21.196872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.986 [2024-07-26 13:44:21.196898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.986 [2024-07-26 13:44:21.196907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.986 [2024-07-26 13:44:21.196914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.986 [2024-07-26 13:44:21.196935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.986 qpair failed and we were unable to recover it. 
00:33:23.986 [2024-07-26 13:44:21.206738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.986 [2024-07-26 13:44:21.206846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.986 [2024-07-26 13:44:21.206872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.986 [2024-07-26 13:44:21.206881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.986 [2024-07-26 13:44:21.206888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.986 [2024-07-26 13:44:21.206908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.986 qpair failed and we were unable to recover it. 00:33:23.986 [2024-07-26 13:44:21.216701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.986 [2024-07-26 13:44:21.216814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.986 [2024-07-26 13:44:21.216841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.986 [2024-07-26 13:44:21.216849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.986 [2024-07-26 13:44:21.216856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.986 [2024-07-26 13:44:21.216876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.986 qpair failed and we were unable to recover it. 00:33:23.986 [2024-07-26 13:44:21.226828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.986 [2024-07-26 13:44:21.226936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.986 [2024-07-26 13:44:21.226963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.986 [2024-07-26 13:44:21.226972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.986 [2024-07-26 13:44:21.226979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.986 [2024-07-26 13:44:21.226999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.986 qpair failed and we were unable to recover it. 
00:33:23.986 [2024-07-26 13:44:21.236852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.986 [2024-07-26 13:44:21.236971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.236990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.236998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.237004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.237020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.246884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.247013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.247039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.247048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.247055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.247075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.256923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.257028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.257047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.257054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.257060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.257077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 
00:33:23.987 [2024-07-26 13:44:21.266836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.266988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.267006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.267013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.267019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.267035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.276981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.277088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.277106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.277117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.277124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.277140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.286893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.286995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.287012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.287020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.287026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.287042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 
00:33:23.987 [2024-07-26 13:44:21.296901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.297003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.297021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.297028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.297034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.297049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.306959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.307075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.307092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.307100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.307106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.307121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.317094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.317197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.317221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.317228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.317234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.317250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 
00:33:23.987 [2024-07-26 13:44:21.327076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.327180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.327197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.327211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.327217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.327233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.337158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.337245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.337263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.337270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.337276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.337291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.347166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.347307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.347325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.347332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.347339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.347355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 
00:33:23.987 [2024-07-26 13:44:21.357230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.987 [2024-07-26 13:44:21.357336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.987 [2024-07-26 13:44:21.357354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.987 [2024-07-26 13:44:21.357361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.987 [2024-07-26 13:44:21.357367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.987 [2024-07-26 13:44:21.357382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.987 qpair failed and we were unable to recover it. 00:33:23.987 [2024-07-26 13:44:21.367181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.367287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.367305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.367316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.367323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.367338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.377230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.377338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.377355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.377362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.377369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.377384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 
00:33:23.988 [2024-07-26 13:44:21.387248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.387347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.387364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.387372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.387379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.387394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.397297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.397402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.397420] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.397427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.397433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.397450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.407309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.407406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.407423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.407431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.407437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.407452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 
00:33:23.988 [2024-07-26 13:44:21.417349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.417449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.417467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.417474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.417480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.417495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.427357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.427461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.427479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.427486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.427492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.427507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.437418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.437545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.437562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.437569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.437575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.437591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 
00:33:23.988 [2024-07-26 13:44:21.447397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.447500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.447517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.447524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.447531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.447546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:23.988 [2024-07-26 13:44:21.457440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.988 [2024-07-26 13:44:21.457576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.988 [2024-07-26 13:44:21.457597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.988 [2024-07-26 13:44:21.457604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.988 [2024-07-26 13:44:21.457611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:23.988 [2024-07-26 13:44:21.457626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.988 qpair failed and we were unable to recover it. 00:33:24.251 [2024-07-26 13:44:21.467470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.467570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.467588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.467596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.467602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.467618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 
00:33:24.251 [2024-07-26 13:44:21.477514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.477621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.477639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.477646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.477652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.477667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 00:33:24.251 [2024-07-26 13:44:21.487503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.487603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.487620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.487628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.487634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.487649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 00:33:24.251 [2024-07-26 13:44:21.497454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.497696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.497715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.497722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.497729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.497744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 
00:33:24.251 [2024-07-26 13:44:21.507477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.507565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.507583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.507590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.507596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.507611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 00:33:24.251 [2024-07-26 13:44:21.517646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.251 [2024-07-26 13:44:21.517748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.251 [2024-07-26 13:44:21.517766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.251 [2024-07-26 13:44:21.517774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.251 [2024-07-26 13:44:21.517780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.251 [2024-07-26 13:44:21.517795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.251 qpair failed and we were unable to recover it. 00:33:24.251 [2024-07-26 13:44:21.527606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.527716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.527742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.527751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.527758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.527778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 
00:33:24.252 [2024-07-26 13:44:21.537646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.537759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.537779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.537786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.537793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.537809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.547689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.547804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.547834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.547843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.547850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.547870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.557706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.557818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.557845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.557853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.557860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.557880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 
00:33:24.252 [2024-07-26 13:44:21.567756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.567882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.567901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.567908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.567914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.567930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.577791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.577901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.577928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.577936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.577943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.577963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.587804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.587912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.587939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.587947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.587954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.587979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 
00:33:24.252 [2024-07-26 13:44:21.597844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.597954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.597974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.597982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.597988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.598004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.607820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.607911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.607928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.607935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.607941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.607957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.617858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.617986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.618013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.618022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.618028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.618048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 
00:33:24.252 [2024-07-26 13:44:21.627933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.628041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.628060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.628067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.628074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.628091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.637845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.637955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.637979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.637987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.637993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.252 [2024-07-26 13:44:21.638009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.252 qpair failed and we were unable to recover it. 00:33:24.252 [2024-07-26 13:44:21.647983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.252 [2024-07-26 13:44:21.648088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.252 [2024-07-26 13:44:21.648106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.252 [2024-07-26 13:44:21.648113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.252 [2024-07-26 13:44:21.648120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.648135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 
00:33:24.253 [2024-07-26 13:44:21.657986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.658087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.658105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.658112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.658119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.658134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 00:33:24.253 [2024-07-26 13:44:21.667981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.668118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.668136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.668143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.668149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.668165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 00:33:24.253 [2024-07-26 13:44:21.678064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.678175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.678193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.678207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.678214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.678234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 
00:33:24.253 [2024-07-26 13:44:21.687961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.688069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.688087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.688094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.688101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.688116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 00:33:24.253 [2024-07-26 13:44:21.698125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.698237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.698255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.698263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.698269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.698288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 00:33:24.253 [2024-07-26 13:44:21.708133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.708360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.708378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.708385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.708391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.708406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 
00:33:24.253 [2024-07-26 13:44:21.718146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.253 [2024-07-26 13:44:21.718259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.253 [2024-07-26 13:44:21.718277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.253 [2024-07-26 13:44:21.718285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.253 [2024-07-26 13:44:21.718291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.253 [2024-07-26 13:44:21.718306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.253 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.728167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.728287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.728308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.728316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.728322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.728337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.738205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.738317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.738334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.738341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.738347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.738362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 
00:33:24.516 [2024-07-26 13:44:21.748140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.748252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.748270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.748277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.748283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.748298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.758279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.758416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.758433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.758440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.758447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.758463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.768258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.768359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.768376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.768383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.768390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.768409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 
00:33:24.516 [2024-07-26 13:44:21.778304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.778404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.778422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.778429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.778435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.778452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.788338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.788440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.788457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.788465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.788471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.788486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 00:33:24.516 [2024-07-26 13:44:21.798384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.516 [2024-07-26 13:44:21.798489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.516 [2024-07-26 13:44:21.798507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.516 [2024-07-26 13:44:21.798514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.516 [2024-07-26 13:44:21.798521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.516 [2024-07-26 13:44:21.798537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.516 qpair failed and we were unable to recover it. 
00:33:24.516 [2024-07-26 13:44:21.808378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.808479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.808496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.808504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.808510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.808525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.818435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.818542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.818563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.818570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.818576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.818591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.828519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.828646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.828662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.828669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.828675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.828691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 
00:33:24.517 [2024-07-26 13:44:21.838521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.838649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.838667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.838674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.838680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.838695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.848518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.848662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.848680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.848687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.848693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.848708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.858479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.858723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.858742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.858750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.858756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.858774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 
00:33:24.517 [2024-07-26 13:44:21.868591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.868741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.868758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.868765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.868772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.868787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.878588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.878700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.878727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.878735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.878742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.878762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.888580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.888684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.888703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.888710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.888717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.888733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 
00:33:24.517 [2024-07-26 13:44:21.898630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.898737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.898764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.898773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.898780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.898799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.908725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.908838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.908869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.908878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.517 [2024-07-26 13:44:21.908885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.517 [2024-07-26 13:44:21.908905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.517 qpair failed and we were unable to recover it. 00:33:24.517 [2024-07-26 13:44:21.918692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.517 [2024-07-26 13:44:21.918811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.517 [2024-07-26 13:44:21.918837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.517 [2024-07-26 13:44:21.918846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.918852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.918872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 
00:33:24.518 [2024-07-26 13:44:21.928717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.928823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.928849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.928858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.928865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.928884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 00:33:24.518 [2024-07-26 13:44:21.938745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.938854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.938881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.938889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.938896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.938915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 00:33:24.518 [2024-07-26 13:44:21.948777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.948889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.948916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.948925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.948936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.948956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 
00:33:24.518 [2024-07-26 13:44:21.958832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.958972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.958998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.959007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.959014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.959033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 00:33:24.518 [2024-07-26 13:44:21.968849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.968961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.968988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.968996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.969003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.969023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 00:33:24.518 [2024-07-26 13:44:21.978833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.518 [2024-07-26 13:44:21.978943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.518 [2024-07-26 13:44:21.978969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.518 [2024-07-26 13:44:21.978978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.518 [2024-07-26 13:44:21.978985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.518 [2024-07-26 13:44:21.979005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.518 qpair failed and we were unable to recover it. 
00:33:24.780 [2024-07-26 13:44:21.988924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:21.989068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:21.989094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:21.989103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:21.989110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:21.989130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 00:33:24.780 [2024-07-26 13:44:21.998968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:21.999079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:21.999099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:21.999106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:21.999113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:21.999129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 00:33:24.780 [2024-07-26 13:44:22.008916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.009020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.009038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.009045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.009052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.009067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 
00:33:24.780 [2024-07-26 13:44:22.018973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.019079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.019097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.019105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.019111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.019126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 00:33:24.780 [2024-07-26 13:44:22.029074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.029205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.029223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.029230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.029237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.029252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 00:33:24.780 [2024-07-26 13:44:22.039056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.039149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.039166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.039173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.039184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.039198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 
00:33:24.780 [2024-07-26 13:44:22.048951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.049061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.049079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.049086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.049093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.049108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.780 qpair failed and we were unable to recover it. 00:33:24.780 [2024-07-26 13:44:22.059111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.780 [2024-07-26 13:44:22.059217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.780 [2024-07-26 13:44:22.059235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.780 [2024-07-26 13:44:22.059243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.780 [2024-07-26 13:44:22.059249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.780 [2024-07-26 13:44:22.059265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.069146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.069259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.069277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.069284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.069290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.069306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 
00:33:24.781 [2024-07-26 13:44:22.079165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.079404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.079424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.079431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.079437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.079453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.089185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.089291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.089310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.089317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.089323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.089339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.099217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.099355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.099372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.099380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.099387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.099402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 
00:33:24.781 [2024-07-26 13:44:22.109214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.109319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.109336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.109344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.109350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.109366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.119252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.119358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.119376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.119383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.119389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.119405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.129301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.129407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.129424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.129432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.129442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.129458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 
00:33:24.781 [2024-07-26 13:44:22.139310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.139412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.139430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.139437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.139443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.139458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.149347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.149453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.149471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.149478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.149485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.149500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.159353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.159457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.159474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.159481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.159488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.159503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 
00:33:24.781 [2024-07-26 13:44:22.169426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.169561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.169579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.169586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.169593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.169608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.179427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.179544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.179562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.179571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.781 [2024-07-26 13:44:22.179577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.781 [2024-07-26 13:44:22.179592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.781 qpair failed and we were unable to recover it. 00:33:24.781 [2024-07-26 13:44:22.189451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.781 [2024-07-26 13:44:22.189560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.781 [2024-07-26 13:44:22.189577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.781 [2024-07-26 13:44:22.189584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.189590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.189605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 
00:33:24.782 [2024-07-26 13:44:22.199509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.199609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.199627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.199634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.199641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.199655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 00:33:24.782 [2024-07-26 13:44:22.209502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.209602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.209620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.209627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.209633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.209649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 00:33:24.782 [2024-07-26 13:44:22.219527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.219632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.219649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.219658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.219671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.219686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 
00:33:24.782 [2024-07-26 13:44:22.229590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.229693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.229711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.229719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.229725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.229741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 00:33:24.782 [2024-07-26 13:44:22.239491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.239600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.239617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.239625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.239631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.239645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 00:33:24.782 [2024-07-26 13:44:22.249558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.782 [2024-07-26 13:44:22.249660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.782 [2024-07-26 13:44:22.249678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.782 [2024-07-26 13:44:22.249685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.782 [2024-07-26 13:44:22.249692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:24.782 [2024-07-26 13:44:22.249707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.782 qpair failed and we were unable to recover it. 
00:33:25.044 [2024-07-26 13:44:22.259657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.044 [2024-07-26 13:44:22.259761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.044 [2024-07-26 13:44:22.259779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.044 [2024-07-26 13:44:22.259786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.044 [2024-07-26 13:44:22.259793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.044 [2024-07-26 13:44:22.259808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.269657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.269766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.269793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.269801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.269808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.269828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.279786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.279941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.279959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.279967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.279973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.279989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 
00:33:25.045 [2024-07-26 13:44:22.289721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.289832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.289859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.289868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.289874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.289895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.299756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.299867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.299894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.299903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.299910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.299929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.309776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.309885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.309912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.309925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.309932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.309952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 
00:33:25.045 [2024-07-26 13:44:22.319799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.319909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.319936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.319945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.319952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.319972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.329738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.329885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.329903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.329911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.329917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.329934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.339856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.339973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.339991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.339998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.340004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.340020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 
00:33:25.045 [2024-07-26 13:44:22.349873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.349972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.349990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.349997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.350004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.350019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.359898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.360004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.360021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.360029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.360035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.360050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.369984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.370086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.370111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.370120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.370127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.370148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 
00:33:25.045 [2024-07-26 13:44:22.379959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.380063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.380081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.380090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.380096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.045 [2024-07-26 13:44:22.380112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.045 qpair failed and we were unable to recover it. 00:33:25.045 [2024-07-26 13:44:22.389908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.045 [2024-07-26 13:44:22.390009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.045 [2024-07-26 13:44:22.390027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.045 [2024-07-26 13:44:22.390034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.045 [2024-07-26 13:44:22.390041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.390056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.400015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.400122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.400140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.400152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.400159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.400174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 
00:33:25.046 [2024-07-26 13:44:22.410019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.410122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.410139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.410146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.410152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.410168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.420084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.420188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.420218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.420226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.420233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.420249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.430155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.430271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.430288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.430295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.430301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.430317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 
00:33:25.046 [2024-07-26 13:44:22.440122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.440230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.440248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.440255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.440262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.440277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.450120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.450223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.450242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.450250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.450257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.450273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.460179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.460281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.460299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.460306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.460312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.460328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 
00:33:25.046 [2024-07-26 13:44:22.470219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.470322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.470340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.470347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.470354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.470370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.480247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.480352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.480369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.480377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.480384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.480399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.490261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.490367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.490386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.490397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.490404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.490420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 
00:33:25.046 [2024-07-26 13:44:22.500305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.500409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.500427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.500434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.500441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.500456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.046 [2024-07-26 13:44:22.510203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.046 [2024-07-26 13:44:22.510309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.046 [2024-07-26 13:44:22.510326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.046 [2024-07-26 13:44:22.510334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.046 [2024-07-26 13:44:22.510340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.046 [2024-07-26 13:44:22.510356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.046 qpair failed and we were unable to recover it. 00:33:25.309 [2024-07-26 13:44:22.520352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.309 [2024-07-26 13:44:22.520460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.309 [2024-07-26 13:44:22.520478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.309 [2024-07-26 13:44:22.520485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.309 [2024-07-26 13:44:22.520491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.309 [2024-07-26 13:44:22.520507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.309 qpair failed and we were unable to recover it. 
00:33:25.309 [2024-07-26 13:44:22.530356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.309 [2024-07-26 13:44:22.530462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.309 [2024-07-26 13:44:22.530479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.309 [2024-07-26 13:44:22.530486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.309 [2024-07-26 13:44:22.530492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.309 [2024-07-26 13:44:22.530508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.309 qpair failed and we were unable to recover it. 00:33:25.309 [2024-07-26 13:44:22.540458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.309 [2024-07-26 13:44:22.540584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.309 [2024-07-26 13:44:22.540601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.309 [2024-07-26 13:44:22.540609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.309 [2024-07-26 13:44:22.540615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.309 [2024-07-26 13:44:22.540631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.309 qpair failed and we were unable to recover it. 00:33:25.309 [2024-07-26 13:44:22.550464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.309 [2024-07-26 13:44:22.550563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.309 [2024-07-26 13:44:22.550581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.309 [2024-07-26 13:44:22.550589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.309 [2024-07-26 13:44:22.550595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.309 [2024-07-26 13:44:22.550610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.309 qpair failed and we were unable to recover it. 
00:33:25.309 [2024-07-26 13:44:22.560488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.309 [2024-07-26 13:44:22.560595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.309 [2024-07-26 13:44:22.560613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.560620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.560628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.560644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.570495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.570610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.570628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.570635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.570642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.570656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.580514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.580615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.580633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.580644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.580651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.580666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 
00:33:25.310 [2024-07-26 13:44:22.590568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.590669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.590687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.590694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.590700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.590715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.600587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.600692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.600709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.600717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.600723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.600738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.610586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.610691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.610709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.610716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.610723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.610737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 
00:33:25.310 [2024-07-26 13:44:22.620639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.620745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.620762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.620769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.620775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.620791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.630677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.630778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.630796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.630803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.630809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.630825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.640735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.640845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.640871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.640880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.640887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.640907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 
00:33:25.310 [2024-07-26 13:44:22.650722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.650831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.650858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.650867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.650873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.650894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.660744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.660871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.660898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.660906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.660913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.660933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.670927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.671035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.671066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.671075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.671082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.671102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 
00:33:25.310 [2024-07-26 13:44:22.680801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.680907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.310 [2024-07-26 13:44:22.680927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.310 [2024-07-26 13:44:22.680935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.310 [2024-07-26 13:44:22.680941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.310 [2024-07-26 13:44:22.680958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.310 qpair failed and we were unable to recover it. 00:33:25.310 [2024-07-26 13:44:22.690783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.310 [2024-07-26 13:44:22.690882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.690900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.690908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.690914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.690930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.700831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.700938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.700965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.700974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.700980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.701000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 
00:33:25.311 [2024-07-26 13:44:22.710919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.711031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.711050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.711057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.711063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.711078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.720897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.721007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.721025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.721032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.721038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.721054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.730926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.731029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.731046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.731053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.731059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.731075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 
00:33:25.311 [2024-07-26 13:44:22.740953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.741054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.741071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.741079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.741085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.741100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.750984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.751090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.751108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.751115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.751122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.751137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.761026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.761134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.761155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.761163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.761169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.761184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 
00:33:25.311 [2024-07-26 13:44:22.771030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.771134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.771152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.771160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.771166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.771181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.311 [2024-07-26 13:44:22.780964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.311 [2024-07-26 13:44:22.781070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.311 [2024-07-26 13:44:22.781087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.311 [2024-07-26 13:44:22.781095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.311 [2024-07-26 13:44:22.781101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.311 [2024-07-26 13:44:22.781116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.311 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.790993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.791094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.791112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.791119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.791125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.791142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 
00:33:25.574 [2024-07-26 13:44:22.801114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.801231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.801249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.801257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.801263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.801283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.811150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.811260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.811277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.811285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.811291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.811307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.821216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.821319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.821336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.821344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.821350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.821366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 
00:33:25.574 [2024-07-26 13:44:22.831216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.831316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.831334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.831341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.831348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.831363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.841225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.841379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.841396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.841403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.841409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.841425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.851424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.851529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.851550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.851557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.851564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.851579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 
00:33:25.574 [2024-07-26 13:44:22.861322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.861563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.861582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.574 [2024-07-26 13:44:22.861589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.574 [2024-07-26 13:44:22.861595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.574 [2024-07-26 13:44:22.861610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.574 qpair failed and we were unable to recover it. 00:33:25.574 [2024-07-26 13:44:22.871345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.574 [2024-07-26 13:44:22.871450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.574 [2024-07-26 13:44:22.871468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.871475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.871481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.871496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.881322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.881427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.881444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.881451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.881458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.881473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 
00:33:25.575 [2024-07-26 13:44:22.891316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.891420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.891438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.891445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.891451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.891473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.901378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.901485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.901502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.901509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.901515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.901530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.911437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.911582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.911599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.911607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.911613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.911627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 
00:33:25.575 [2024-07-26 13:44:22.921350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.921459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.921477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.921484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.921491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.921511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.931376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.931478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.931495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.931503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.931509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.931524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.941532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.941637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.941658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.941665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.941672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.941688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 
00:33:25.575 [2024-07-26 13:44:22.951561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.951664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.951681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.951688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.951695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.951711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.961916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.962156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.962182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.962191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.962198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.962225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.971610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.971715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.971734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.971741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.971748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.971765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 
00:33:25.575 [2024-07-26 13:44:22.981620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.575 [2024-07-26 13:44:22.981730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.575 [2024-07-26 13:44:22.981756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.575 [2024-07-26 13:44:22.981764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.575 [2024-07-26 13:44:22.981772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.575 [2024-07-26 13:44:22.981797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.575 qpair failed and we were unable to recover it. 00:33:25.575 [2024-07-26 13:44:22.991532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:22.991651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:22.991670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:22.991677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:22.991683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:22.991700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 00:33:25.576 [2024-07-26 13:44:23.001841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:23.001984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:23.002002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:23.002009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:23.002015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:23.002031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 
00:33:25.576 [2024-07-26 13:44:23.011690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:23.011829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:23.011855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:23.011864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:23.011871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:23.011891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 00:33:25.576 [2024-07-26 13:44:23.021718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:23.021825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:23.021851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:23.021860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:23.021866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:23.021887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 00:33:25.576 [2024-07-26 13:44:23.031751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:23.031864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:23.031894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:23.031904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:23.031911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:23.031931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 
00:33:25.576 [2024-07-26 13:44:23.041837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.576 [2024-07-26 13:44:23.041996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.576 [2024-07-26 13:44:23.042022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.576 [2024-07-26 13:44:23.042030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.576 [2024-07-26 13:44:23.042037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.576 [2024-07-26 13:44:23.042058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.576 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.051785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.051893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.051920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.051929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.051935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.051955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.061860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.061999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.062025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.062035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.062041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.062061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 
00:33:25.839 [2024-07-26 13:44:23.072026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.072139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.072158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.072165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.072172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.072193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.081888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.082003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.082020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.082027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.082034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.082050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.091916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.092014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.092031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.092038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.092044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.092060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 
00:33:25.839 [2024-07-26 13:44:23.101930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.102035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.102053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.102060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.102067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.102083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.111998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.112098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.112116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.112124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.112130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.112146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 00:33:25.839 [2024-07-26 13:44:23.121992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.839 [2024-07-26 13:44:23.122103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.839 [2024-07-26 13:44:23.122125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.839 [2024-07-26 13:44:23.122132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.839 [2024-07-26 13:44:23.122139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.839 [2024-07-26 13:44:23.122155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.839 qpair failed and we were unable to recover it. 
00:33:25.840 [2024-07-26 13:44:23.131931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.132048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.132066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.132073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.132079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.132094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.142062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.142166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.142183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.142190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.142196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.142220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.152103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.152211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.152228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.152236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.152243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.152259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 
00:33:25.840 [2024-07-26 13:44:23.162103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.162217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.162234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.162241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.162252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.162268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.172141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.172284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.172302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.172309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.172315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.172331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.182177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.182285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.182303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.182310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.182317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.182333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 
00:33:25.840 [2024-07-26 13:44:23.192186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.192289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.192306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.192314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.192321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.192336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.202105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.202213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.202230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.202238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.202244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.202260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.212230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.212332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.212349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.212356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.212363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.212379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 
00:33:25.840 [2024-07-26 13:44:23.222267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.222370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.222387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.222394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.222401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.222416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.232192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.232294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.232312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.232319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.232325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.232341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 00:33:25.840 [2024-07-26 13:44:23.242357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.840 [2024-07-26 13:44:23.242482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.840 [2024-07-26 13:44:23.242499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.840 [2024-07-26 13:44:23.242507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.840 [2024-07-26 13:44:23.242513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.840 [2024-07-26 13:44:23.242528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.840 qpair failed and we were unable to recover it. 
00:33:25.840 [2024-07-26 13:44:23.252363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.252462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.252479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.252486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.252497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.252513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 00:33:25.841 [2024-07-26 13:44:23.262384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.262481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.262497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.262504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.262511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.262526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 00:33:25.841 [2024-07-26 13:44:23.272478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.272602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.272619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.272626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.272632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.272647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 
00:33:25.841 [2024-07-26 13:44:23.282449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.282571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.282587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.282595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.282602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.282617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 00:33:25.841 [2024-07-26 13:44:23.292466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.292565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.292582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.292589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.292596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.292612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 00:33:25.841 [2024-07-26 13:44:23.302485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:25.841 [2024-07-26 13:44:23.302589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:25.841 [2024-07-26 13:44:23.302606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:25.841 [2024-07-26 13:44:23.302613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:25.841 [2024-07-26 13:44:23.302619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:25.841 [2024-07-26 13:44:23.302635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:25.841 qpair failed and we were unable to recover it. 
00:33:26.104 [2024-07-26 13:44:23.312418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.104 [2024-07-26 13:44:23.312518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.312536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.312544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.312550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.312565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.322482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.322589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.322605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.322612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.322619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.322635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.332602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.332699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.332716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.332723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.332730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.332745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 
00:33:26.105 [2024-07-26 13:44:23.342629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.342731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.342747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.342755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.342766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.342781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.352663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.352766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.352785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.352792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.352799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.352814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.362678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.362787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.362813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.362822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.362828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.362849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 
00:33:26.105 [2024-07-26 13:44:23.372587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.372693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.372720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.372729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.372735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.372756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.382717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.382832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.382858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.382867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.382875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.382895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.392766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.392872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.392892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.392900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.392906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.392922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 
00:33:26.105 [2024-07-26 13:44:23.402708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.402815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.402832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.402839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.402846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.402862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.412784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.412888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.412905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.412913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.412919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.412934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.422860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.422961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.422979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.422986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.422992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.423008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 
00:33:26.105 [2024-07-26 13:44:23.432848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.433003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.433030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.433039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.433049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.433069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.442823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.442932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.442951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.442958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.442964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.442981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.452896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.452997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.453015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.453022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.453028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.453044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 
00:33:26.105 [2024-07-26 13:44:23.463108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.463212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.463230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.463246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.463253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.463269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.473011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.105 [2024-07-26 13:44:23.473093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.105 [2024-07-26 13:44:23.473109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.105 [2024-07-26 13:44:23.473117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.105 [2024-07-26 13:44:23.473123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.105 [2024-07-26 13:44:23.473138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.105 qpair failed and we were unable to recover it. 00:33:26.105 [2024-07-26 13:44:23.483053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.483194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.483220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.483227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.483234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.106 [2024-07-26 13:44:23.483249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.106 qpair failed and we were unable to recover it. 
00:33:26.106 [2024-07-26 13:44:23.492919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.493017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.493034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.493042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.493048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.106 [2024-07-26 13:44:23.493063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 [2024-07-26 13:44:23.503078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.503182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.503205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.503212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.503219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.106 [2024-07-26 13:44:23.503234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 [2024-07-26 13:44:23.512985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.513087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.513104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.513112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.513118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1812010 00:33:26.106 [2024-07-26 13:44:23.513133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:26.106 qpair failed and we were unable to recover it. 
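The entries above repeat a single pattern: the host retries the NVMe-oF Fabrics CONNECT for an I/O qpair, the target's _nvmf_ctrlr_add_io_qpair rejects it with "Unknown controller ID 0x1", the host sees the CONNECT complete with sct 1, sc 130, and spdk_nvme_qpair_process_completions then reports CQ transport error -6 and the qpair is given up. That is consistent with the target_disconnect test deliberately disrupting the controller while I/O qpairs keep trying to attach to it. The lines below are a hedged sketch, not part of the test scripts, of how the same connect attempt could be made by hand with nvme-cli; the transport, address, service ID and subsystem NQN are taken from the log, while the assumption that nvme-cli is available on the initiator host is ours.
# Hedged sketch (assumes nvme-cli on the initiator): try the same NVMe/TCP connect
# the test is exercising. While the target keeps dropping the controller this is
# expected to fail just as the entries above do.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Show whatever did attach, then detach again if the connect happened to succeed.
nvme list-subsys
nvme disconnect -n nqn.2016-06.io.spdk:cnode1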
00:33:26.106 [2024-07-26 13:44:23.513501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181fb00 is same with the state(5) to be set 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 [2024-07-26 13:44:23.514431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:26.106 [2024-07-26 13:44:23.523252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.523519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.523574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.523597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.523616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff428000b90 00:33:26.106 [2024-07-26 13:44:23.523664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 [2024-07-26 13:44:23.533155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.533348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.533381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.533396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.533409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff428000b90 00:33:26.106 [2024-07-26 13:44:23.533442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 
00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Read completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 Write completed with error (sct=0, sc=8) 00:33:26.106 starting I/O failed 00:33:26.106 [2024-07-26 13:44:23.533779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:26.106 [2024-07-26 13:44:23.543125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.543216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.543237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.543243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.543248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff420000b90 00:33:26.106 [2024-07-26 13:44:23.543263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 [2024-07-26 13:44:23.553249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.553362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.553377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.553382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.553386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff420000b90 00:33:26.106 [2024-07-26 13:44:23.553400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:26.106 qpair failed and we were unable to recover it. 
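The "completed with error (sct=0, sc=8) ... starting I/O failed" walls above list each outstanding read or write completing with an error as its qpair is lost, after which the same CQ transport error -6 is reported and the qpair is abandoned; the later entries simply reference different tqpair pointers (0x7ff428000b90, 0x7ff420000b90) than the earlier 0x1812010 ones. When digging through output like this, a few greps are usually enough to summarise it. The commands below are a hedged helper, not part of the test suite, and the file name target_disconnect.log is an assumption standing in for wherever this console output was saved.
# Hedged helper (file name is an assumption): count unrecovered qpairs and failed
# I/Os in a saved copy of this output.
grep -c 'qpair failed and we were unable to recover it' target_disconnect.log
grep -c 'starting I/O failed' target_disconnect.log
# Group the connect failures by the qpair pointer that never connected.
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c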
00:33:26.106 [2024-07-26 13:44:23.563333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.563609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.563678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.106 [2024-07-26 13:44:23.563703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.106 [2024-07-26 13:44:23.563722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff418000b90 00:33:26.106 [2024-07-26 13:44:23.563785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.106 qpair failed and we were unable to recover it. 00:33:26.106 [2024-07-26 13:44:23.573325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:26.106 [2024-07-26 13:44:23.573562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:26.106 [2024-07-26 13:44:23.573604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:26.107 [2024-07-26 13:44:23.573622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:26.107 [2024-07-26 13:44:23.573640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff418000b90 00:33:26.107 [2024-07-26 13:44:23.573679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:26.107 qpair failed and we were unable to recover it. 00:33:26.107 [2024-07-26 13:44:23.574147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181fb00 (9): Bad file descriptor 00:33:26.368 Initializing NVMe Controllers 00:33:26.368 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:26.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:26.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:26.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:26.368 Initialization complete. Launching workers. 
00:33:26.368 Starting thread on core 1 00:33:26.368 Starting thread on core 2 00:33:26.368 Starting thread on core 3 00:33:26.368 Starting thread on core 0 00:33:26.368 13:44:23 -- host/target_disconnect.sh@59 -- # sync 00:33:26.368 00:33:26.368 real 0m11.363s 00:33:26.368 user 0m20.286s 00:33:26.368 sys 0m4.132s 00:33:26.368 13:44:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.368 13:44:23 -- common/autotest_common.sh@10 -- # set +x 00:33:26.368 ************************************ 00:33:26.368 END TEST nvmf_target_disconnect_tc2 00:33:26.368 ************************************ 00:33:26.368 13:44:23 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:33:26.368 13:44:23 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:26.368 13:44:23 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:33:26.368 13:44:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:26.368 13:44:23 -- nvmf/common.sh@116 -- # sync 00:33:26.368 13:44:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:26.368 13:44:23 -- nvmf/common.sh@119 -- # set +e 00:33:26.368 13:44:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:26.368 13:44:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:26.368 rmmod nvme_tcp 00:33:26.368 rmmod nvme_fabrics 00:33:26.368 rmmod nvme_keyring 00:33:26.368 13:44:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:26.368 13:44:23 -- nvmf/common.sh@123 -- # set -e 00:33:26.368 13:44:23 -- nvmf/common.sh@124 -- # return 0 00:33:26.368 13:44:23 -- nvmf/common.sh@477 -- # '[' -n 1190411 ']' 00:33:26.368 13:44:23 -- nvmf/common.sh@478 -- # killprocess 1190411 00:33:26.368 13:44:23 -- common/autotest_common.sh@926 -- # '[' -z 1190411 ']' 00:33:26.368 13:44:23 -- common/autotest_common.sh@930 -- # kill -0 1190411 00:33:26.368 13:44:23 -- common/autotest_common.sh@931 -- # uname 00:33:26.368 13:44:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:26.368 13:44:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1190411 00:33:26.368 13:44:23 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:33:26.368 13:44:23 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:33:26.368 13:44:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1190411' 00:33:26.368 killing process with pid 1190411 00:33:26.368 13:44:23 -- common/autotest_common.sh@945 -- # kill 1190411 00:33:26.368 13:44:23 -- common/autotest_common.sh@950 -- # wait 1190411 00:33:26.629 13:44:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:26.629 13:44:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:26.629 13:44:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:26.629 13:44:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.629 13:44:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:26.629 13:44:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.629 13:44:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.629 13:44:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.544 13:44:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:28.544 00:33:28.544 real 0m21.154s 00:33:28.544 user 0m48.042s 00:33:28.544 sys 0m9.794s 00:33:28.544 13:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.544 13:44:25 -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 ************************************ 00:33:28.544 END TEST nvmf_target_disconnect 00:33:28.544 
************************************ 00:33:28.544 13:44:25 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:33:28.544 13:44:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:28.544 13:44:25 -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 13:44:26 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:33:28.544 00:33:28.544 real 25m57.324s 00:33:28.544 user 69m31.417s 00:33:28.544 sys 7m15.733s 00:33:28.544 13:44:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.544 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:28.544 ************************************ 00:33:28.544 END TEST nvmf_tcp 00:33:28.544 ************************************ 00:33:28.806 13:44:26 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:28.806 13:44:26 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:28.806 13:44:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:28.806 13:44:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:28.806 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:28.806 ************************************ 00:33:28.806 START TEST spdkcli_nvmf_tcp 00:33:28.806 ************************************ 00:33:28.806 13:44:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:28.806 * Looking for test storage... 00:33:28.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:28.806 13:44:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:28.806 13:44:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.806 13:44:26 -- nvmf/common.sh@7 -- # uname -s 00:33:28.806 13:44:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.806 13:44:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.806 13:44:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.806 13:44:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.806 13:44:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.806 13:44:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.806 13:44:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.806 13:44:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.806 13:44:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.806 13:44:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.806 13:44:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.806 13:44:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:28.806 13:44:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.806 13:44:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.806 13:44:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.806 13:44:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.806 13:44:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:33:28.806 13:44:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.806 13:44:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.806 13:44:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.806 13:44:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.806 13:44:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.806 13:44:26 -- paths/export.sh@5 -- # export PATH 00:33:28.806 13:44:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.806 13:44:26 -- nvmf/common.sh@46 -- # : 0 00:33:28.806 13:44:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:28.806 13:44:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:28.806 13:44:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:28.806 13:44:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.806 13:44:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.806 13:44:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:28.806 13:44:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:28.806 13:44:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:28.806 13:44:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:28.806 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:28.806 13:44:26 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:28.806 13:44:26 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1192348 00:33:28.806 13:44:26 -- spdkcli/common.sh@34 -- # waitforlisten 1192348 00:33:28.806 13:44:26 -- common/autotest_common.sh@819 -- # '[' -z 1192348 ']' 00:33:28.806 13:44:26 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:28.806 13:44:26 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.806 13:44:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:28.806 13:44:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.806 13:44:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:28.806 13:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:28.806 [2024-07-26 13:44:26.248133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:28.806 [2024-07-26 13:44:26.248215] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192348 ] 00:33:28.806 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.067 [2024-07-26 13:44:26.314823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:29.067 [2024-07-26 13:44:26.352833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:29.067 [2024-07-26 13:44:26.353138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.067 [2024-07-26 13:44:26.353139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.638 13:44:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:29.638 13:44:27 -- common/autotest_common.sh@852 -- # return 0 00:33:29.638 13:44:27 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:29.638 13:44:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:29.638 13:44:27 -- common/autotest_common.sh@10 -- # set +x 00:33:29.638 13:44:27 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:29.638 13:44:27 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:29.638 13:44:27 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:29.638 13:44:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:29.638 13:44:27 -- common/autotest_common.sh@10 -- # set +x 00:33:29.638 13:44:27 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:29.638 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:29.638 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:29.638 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:29.638 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:29.638 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:29.638 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:29.638 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:29.638 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:29.638 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:29.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:29.638 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:29.638 ' 00:33:30.211 [2024-07-26 13:44:27.381350] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:32.126 [2024-07-26 13:44:29.383952] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.512 [2024-07-26 13:44:30.551802] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:35.424 [2024-07-26 13:44:32.690012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:37.338 [2024-07-26 13:44:34.527607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:38.725 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:38.725 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:38.725 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.725 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.725 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:38.725 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:38.725 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:38.725 13:44:36 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:38.725 13:44:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:38.725 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:33:38.725 13:44:36 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:38.725 13:44:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:38.725 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:33:38.725 13:44:36 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:38.725 13:44:36 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:38.986 13:44:36 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:39.247 13:44:36 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:39.247 13:44:36 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:39.247 13:44:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:39.247 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:33:39.247 13:44:36 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:39.247 13:44:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:39.247 13:44:36 -- common/autotest_common.sh@10 -- # set +x 00:33:39.247 13:44:36 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:39.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:39.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:39.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:39.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:39.247 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:39.247 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:39.247 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:39.247 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:39.247 ' 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:44.539 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:44.539 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:44.539 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:44.539 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:44.539 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:44.539 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:44.539 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:44.539 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:44.539 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:44.539 13:44:41 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:44.539 13:44:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:44.539 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:33:44.539 13:44:41 -- spdkcli/nvmf.sh@90 -- # killprocess 1192348 00:33:44.539 13:44:41 -- common/autotest_common.sh@926 -- # '[' -z 1192348 ']' 00:33:44.539 13:44:41 -- common/autotest_common.sh@930 -- # kill -0 1192348 00:33:44.539 13:44:41 -- common/autotest_common.sh@931 -- # uname 00:33:44.539 13:44:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:44.539 13:44:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1192348 00:33:44.539 13:44:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:44.539 13:44:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:44.539 13:44:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1192348' 00:33:44.539 killing process with pid 1192348 00:33:44.539 13:44:41 -- common/autotest_common.sh@945 -- # kill 1192348 00:33:44.539 [2024-07-26 13:44:41.475964] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:44.539 13:44:41 -- common/autotest_common.sh@950 -- # wait 1192348 00:33:44.539 13:44:41 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:44.539 13:44:41 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:44.539 13:44:41 -- spdkcli/common.sh@13 -- # '[' -n 1192348 ']' 00:33:44.539 13:44:41 -- spdkcli/common.sh@14 -- # killprocess 1192348 00:33:44.539 13:44:41 -- common/autotest_common.sh@926 -- # '[' -z 1192348 ']' 00:33:44.539 13:44:41 -- common/autotest_common.sh@930 -- # kill -0 1192348 00:33:44.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1192348) - No such process 00:33:44.539 13:44:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1192348 is not found' 00:33:44.539 Process with pid 1192348 is not found 00:33:44.539 13:44:41 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:44.539 13:44:41 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:44.539 13:44:41 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:44.539 00:33:44.539 real 0m15.538s 00:33:44.539 user 0m31.967s 00:33:44.539 sys 0m0.730s 00:33:44.539 13:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.539 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:33:44.539 ************************************ 00:33:44.539 END TEST spdkcli_nvmf_tcp 00:33:44.539 ************************************ 00:33:44.539 13:44:41 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:44.539 13:44:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:44.539 13:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:44.539 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:33:44.539 ************************************ 00:33:44.539 START TEST 
nvmf_identify_passthru 00:33:44.539 ************************************ 00:33:44.539 13:44:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:44.539 * Looking for test storage... 00:33:44.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.539 13:44:41 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.539 13:44:41 -- nvmf/common.sh@7 -- # uname -s 00:33:44.539 13:44:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.539 13:44:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.539 13:44:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.539 13:44:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.539 13:44:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.539 13:44:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.539 13:44:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.539 13:44:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.539 13:44:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.539 13:44:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.539 13:44:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:44.539 13:44:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:44.539 13:44:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.539 13:44:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.539 13:44:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.539 13:44:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.539 13:44:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.539 13:44:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.539 13:44:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.540 13:44:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@5 -- # export PATH 00:33:44.540 
13:44:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- nvmf/common.sh@46 -- # : 0 00:33:44.540 13:44:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:44.540 13:44:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:44.540 13:44:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:44.540 13:44:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.540 13:44:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.540 13:44:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:44.540 13:44:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:44.540 13:44:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:44.540 13:44:41 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.540 13:44:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.540 13:44:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.540 13:44:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.540 13:44:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- paths/export.sh@5 -- # export PATH 00:33:44.540 13:44:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.540 13:44:41 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:44.540 13:44:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:44.540 13:44:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.540 13:44:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:44.540 13:44:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:44.540 13:44:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:44.540 13:44:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.540 13:44:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:44.540 13:44:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.540 13:44:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:44.540 13:44:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:44.540 13:44:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:44.540 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:33:52.685 13:44:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:52.685 13:44:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:52.685 13:44:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:52.685 13:44:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:52.685 13:44:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:52.685 13:44:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:52.685 13:44:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:52.685 13:44:48 -- nvmf/common.sh@294 -- # net_devs=() 00:33:52.685 13:44:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:52.685 13:44:48 -- nvmf/common.sh@295 -- # e810=() 00:33:52.685 13:44:48 -- nvmf/common.sh@295 -- # local -ga e810 00:33:52.685 13:44:48 -- nvmf/common.sh@296 -- # x722=() 00:33:52.685 13:44:48 -- nvmf/common.sh@296 -- # local -ga x722 00:33:52.685 13:44:48 -- nvmf/common.sh@297 -- # mlx=() 00:33:52.685 13:44:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:52.685 13:44:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.685 13:44:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:52.685 13:44:48 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:52.685 13:44:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:52.685 13:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:52.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:52.685 13:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:52.685 13:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:52.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:52.685 13:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:52.685 13:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.685 13:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.685 13:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:52.685 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:52.685 13:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.685 13:44:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:52.685 13:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.685 13:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.685 13:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:52.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:52.685 13:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.685 13:44:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:52.685 13:44:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:52.685 13:44:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:52.685 13:44:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.685 13:44:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.685 13:44:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.685 13:44:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:52.685 13:44:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.685 13:44:48 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.685 13:44:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:52.685 13:44:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.685 13:44:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.685 13:44:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:52.685 13:44:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:52.685 13:44:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.685 13:44:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.685 13:44:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.685 13:44:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.685 13:44:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:52.685 13:44:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.685 13:44:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.685 13:44:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.685 13:44:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:52.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:33:52.685 00:33:52.685 --- 10.0.0.2 ping statistics --- 00:33:52.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.685 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:33:52.686 13:44:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:52.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:33:52.686 00:33:52.686 --- 10.0.0.1 ping statistics --- 00:33:52.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.686 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:33:52.686 13:44:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.686 13:44:48 -- nvmf/common.sh@410 -- # return 0 00:33:52.686 13:44:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:52.686 13:44:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.686 13:44:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:52.686 13:44:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:52.686 13:44:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.686 13:44:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:52.686 13:44:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:52.686 13:44:48 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:52.686 13:44:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:52.686 13:44:48 -- common/autotest_common.sh@10 -- # set +x 00:33:52.686 13:44:48 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:52.686 13:44:48 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:52.686 13:44:48 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:52.686 13:44:48 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:52.686 13:44:49 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:52.686 13:44:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:52.686 13:44:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:52.686 13:44:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:52.686 13:44:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:52.686 13:44:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:52.686 13:44:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:52.686 13:44:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:52.686 13:44:49 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:33:52.686 13:44:49 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:52.686 13:44:49 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:52.686 13:44:49 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:52.686 13:44:49 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:52.686 13:44:49 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:52.686 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.686 13:44:49 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:33:52.686 13:44:49 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:52.686 13:44:49 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:52.686 13:44:49 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:52.686 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.686 13:44:50 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:33:52.686 13:44:50 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:52.686 13:44:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:52.686 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:52.686 13:44:50 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:52.686 13:44:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:52.686 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:52.686 13:44:50 -- target/identify_passthru.sh@31 -- # nvmfpid=1199196 00:33:52.686 13:44:50 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:52.686 13:44:50 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:52.686 13:44:50 -- target/identify_passthru.sh@35 -- # waitforlisten 1199196 00:33:52.686 13:44:50 -- common/autotest_common.sh@819 -- # '[' -z 1199196 ']' 00:33:52.686 13:44:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.686 13:44:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:52.686 13:44:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.686 13:44:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:52.686 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:52.686 [2024-07-26 13:44:50.138675] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
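For reference, the bdf discovery, baseline identify, and paused target launch that identify_passthru.sh traces above can be reproduced by hand roughly as follows. This is a minimal sketch only, not the test script itself: $SPDK_DIR stands in for the jenkins workspace path, the PCI address is whatever gen_nvme.sh reports on the machine at hand, and the real job additionally wraps nvmf_tgt in "ip netns exec cvl_0_0_ns_spdk" and uses the waitforlisten helper instead of the simple socket poll shown here.

  # discover the first local NVMe bdf the way get_first_nvme_bdf does
  # (gen_nvme.sh emits a bdev_nvme_attach_controller config; jq pulls the traddr)
  bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

  # read serial and model straight from the PCIe device as the reference values
  serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')
  model=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Model Number:' | awk '{print $3}')

  # start the target paused; --wait-for-rpc makes it hold in the RPC-only state so that
  # "nvmf_set_config --passthru-identify-ctrlr" (the rpc_cmd traced just below) can be
  # applied before framework_start_init brings the subsystems up
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  tgt_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified stand-in for waitforlisten

  "$SPDK_DIR/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
  "$SPDK_DIR/scripts/rpc.py" framework_start_init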
00:33:52.686 [2024-07-26 13:44:50.138735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.947 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.947 [2024-07-26 13:44:50.204819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.947 [2024-07-26 13:44:50.234117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:52.947 [2024-07-26 13:44:50.234263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.947 [2024-07-26 13:44:50.234274] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.947 [2024-07-26 13:44:50.234283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.947 [2024-07-26 13:44:50.234466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.948 [2024-07-26 13:44:50.234483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.948 [2024-07-26 13:44:50.234604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.948 [2024-07-26 13:44:50.234605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.520 13:44:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:53.520 13:44:50 -- common/autotest_common.sh@852 -- # return 0 00:33:53.520 13:44:50 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:53.520 13:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.520 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:53.520 INFO: Log level set to 20 00:33:53.520 INFO: Requests: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "method": "nvmf_set_config", 00:33:53.520 "id": 1, 00:33:53.520 "params": { 00:33:53.520 "admin_cmd_passthru": { 00:33:53.520 "identify_ctrlr": true 00:33:53.520 } 00:33:53.520 } 00:33:53.520 } 00:33:53.520 00:33:53.520 INFO: response: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "id": 1, 00:33:53.520 "result": true 00:33:53.520 } 00:33:53.520 00:33:53.520 13:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.520 13:44:50 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:53.520 13:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.520 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:53.520 INFO: Setting log level to 20 00:33:53.520 INFO: Setting log level to 20 00:33:53.520 INFO: Log level set to 20 00:33:53.520 INFO: Log level set to 20 00:33:53.520 INFO: Requests: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "method": "framework_start_init", 00:33:53.520 "id": 1 00:33:53.520 } 00:33:53.520 00:33:53.520 INFO: Requests: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "method": "framework_start_init", 00:33:53.520 "id": 1 00:33:53.520 } 00:33:53.520 00:33:53.520 [2024-07-26 13:44:50.971625] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:53.520 INFO: response: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "id": 1, 00:33:53.520 "result": true 00:33:53.520 } 00:33:53.520 00:33:53.520 INFO: response: 00:33:53.520 { 00:33:53.520 "jsonrpc": "2.0", 00:33:53.520 "id": 1, 00:33:53.520 "result": true 00:33:53.520 } 00:33:53.520 00:33:53.520 13:44:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.520 13:44:50 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.520 13:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.520 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:53.520 INFO: Setting log level to 40 00:33:53.520 INFO: Setting log level to 40 00:33:53.520 INFO: Setting log level to 40 00:33:53.520 [2024-07-26 13:44:50.984865] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.781 13:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:53.781 13:44:50 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:53.781 13:44:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:53.781 13:44:50 -- common/autotest_common.sh@10 -- # set +x 00:33:53.781 13:44:51 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:53.781 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:53.781 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 Nvme0n1 00:33:54.042 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.042 13:44:51 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:54.042 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.042 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.042 13:44:51 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:54.042 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.042 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.042 13:44:51 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.042 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.042 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 [2024-07-26 13:44:51.364549] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.042 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.042 13:44:51 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:54.042 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.042 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.042 [2024-07-26 13:44:51.376297] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:54.042 [ 00:33:54.042 { 00:33:54.042 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:54.042 "subtype": "Discovery", 00:33:54.042 "listen_addresses": [], 00:33:54.042 "allow_any_host": true, 00:33:54.042 "hosts": [] 00:33:54.042 }, 00:33:54.042 { 00:33:54.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.042 "subtype": "NVMe", 00:33:54.042 "listen_addresses": [ 00:33:54.042 { 00:33:54.042 "transport": "TCP", 00:33:54.042 "trtype": "TCP", 00:33:54.042 "adrfam": "IPv4", 00:33:54.042 "traddr": "10.0.0.2", 00:33:54.042 "trsvcid": "4420" 00:33:54.042 } 00:33:54.042 ], 00:33:54.042 "allow_any_host": true, 00:33:54.042 "hosts": [], 00:33:54.042 "serial_number": "SPDK00000000000001", 
00:33:54.042 "model_number": "SPDK bdev Controller", 00:33:54.042 "max_namespaces": 1, 00:33:54.042 "min_cntlid": 1, 00:33:54.042 "max_cntlid": 65519, 00:33:54.042 "namespaces": [ 00:33:54.042 { 00:33:54.042 "nsid": 1, 00:33:54.042 "bdev_name": "Nvme0n1", 00:33:54.042 "name": "Nvme0n1", 00:33:54.042 "nguid": "36344730526054870025384500000044", 00:33:54.042 "uuid": "36344730-5260-5487-0025-384500000044" 00:33:54.042 } 00:33:54.042 ] 00:33:54.042 } 00:33:54.042 ] 00:33:54.043 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.043 13:44:51 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:54.043 13:44:51 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:54.043 13:44:51 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:54.043 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.344 13:44:51 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:33:54.344 13:44:51 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:54.344 13:44:51 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:54.344 13:44:51 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:54.344 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.675 13:44:51 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:33:54.675 13:44:51 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:33:54.675 13:44:51 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:33:54.675 13:44:51 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.675 13:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.675 13:44:51 -- common/autotest_common.sh@10 -- # set +x 00:33:54.675 13:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.675 13:44:51 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:54.675 13:44:51 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:54.675 13:44:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:54.675 13:44:51 -- nvmf/common.sh@116 -- # sync 00:33:54.675 13:44:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:54.675 13:44:51 -- nvmf/common.sh@119 -- # set +e 00:33:54.675 13:44:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:54.675 13:44:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:54.675 rmmod nvme_tcp 00:33:54.675 rmmod nvme_fabrics 00:33:54.675 rmmod nvme_keyring 00:33:54.675 13:44:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:54.675 13:44:51 -- nvmf/common.sh@123 -- # set -e 00:33:54.675 13:44:51 -- nvmf/common.sh@124 -- # return 0 00:33:54.675 13:44:51 -- nvmf/common.sh@477 -- # '[' -n 1199196 ']' 00:33:54.675 13:44:51 -- nvmf/common.sh@478 -- # killprocess 1199196 00:33:54.675 13:44:51 -- common/autotest_common.sh@926 -- # '[' -z 1199196 ']' 00:33:54.675 13:44:51 -- common/autotest_common.sh@930 -- # kill -0 1199196 00:33:54.675 13:44:51 -- common/autotest_common.sh@931 -- # uname 00:33:54.675 13:44:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:54.675 13:44:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1199196 00:33:54.675 13:44:51 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:54.675 13:44:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:54.675 13:44:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1199196' 00:33:54.675 killing process with pid 1199196 00:33:54.675 13:44:51 -- common/autotest_common.sh@945 -- # kill 1199196 00:33:54.675 [2024-07-26 13:44:51.908114] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:54.675 13:44:51 -- common/autotest_common.sh@950 -- # wait 1199196 00:33:54.936 13:44:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:54.936 13:44:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:54.936 13:44:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:54.936 13:44:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:54.936 13:44:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:54.936 13:44:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.936 13:44:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:54.936 13:44:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.853 13:44:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:56.853 00:33:56.853 real 0m12.584s 00:33:56.853 user 0m9.945s 00:33:56.853 sys 0m6.100s 00:33:56.853 13:44:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:56.853 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:33:56.853 ************************************ 00:33:56.853 END TEST nvmf_identify_passthru 00:33:56.853 ************************************ 00:33:56.853 13:44:54 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:56.853 13:44:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:56.853 13:44:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:56.853 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:33:56.853 ************************************ 00:33:56.853 START TEST nvmf_dif 00:33:56.853 ************************************ 00:33:56.853 13:44:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:57.115 * Looking for test storage... 
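The pass/fail criterion in the identify_passthru run above is simply that the serial and model numbers read through the NVMe-oF/TCP subsystem match the ones read earlier from the PCIe device, which only happens when the passthru identify handler is active (hence the "'[' S64GNE0R605487 '!=' S64GNE0R605487 ']'" comparison in the trace). A rough standalone equivalent of that check, as a sketch only: $SPDK_DIR is a placeholder for the workspace path, and the bdf, listen address and NQN mirror the values seen in this log.

  bdf=0000:65:00.0   # PCI address of the backing NVMe device, per the trace above

  # identify once over PCIe and once through the TCP subsystem that passes admin identify through
  pcie_serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')
  tcp_serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | grep 'Serial Number:' | awk '{print $3}')

  # with --passthru-identify-ctrlr set, the remote identify data is the backing device's,
  # so the two values must be identical
  if [ "$pcie_serial" != "$tcp_serial" ]; then
      echo "passthru identify mismatch: $pcie_serial vs $tcp_serial" >&2
      exit 1
  fi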
00:33:57.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.115 13:44:54 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.115 13:44:54 -- nvmf/common.sh@7 -- # uname -s 00:33:57.115 13:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.115 13:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.115 13:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.115 13:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.115 13:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.115 13:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.115 13:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.115 13:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.115 13:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.115 13:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.115 13:44:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:57.115 13:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:57.115 13:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.115 13:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.115 13:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.115 13:44:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.115 13:44:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.115 13:44:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.115 13:44:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.115 13:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.115 13:44:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.115 13:44:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.115 13:44:54 -- paths/export.sh@5 -- # export PATH 00:33:57.115 13:44:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.115 13:44:54 -- nvmf/common.sh@46 -- # : 0 00:33:57.115 13:44:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:57.116 13:44:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:57.116 13:44:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:57.116 13:44:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.116 13:44:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.116 13:44:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:57.116 13:44:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:57.116 13:44:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:57.116 13:44:54 -- target/dif.sh@15 -- # NULL_META=16 00:33:57.116 13:44:54 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:57.116 13:44:54 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:57.116 13:44:54 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:57.116 13:44:54 -- target/dif.sh@135 -- # nvmftestinit 00:33:57.116 13:44:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:57.116 13:44:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.116 13:44:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:57.116 13:44:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:57.116 13:44:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:57.116 13:44:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.116 13:44:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.116 13:44:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.116 13:44:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:57.116 13:44:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:57.116 13:44:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:57.116 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:34:05.258 13:45:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:05.258 13:45:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:05.258 13:45:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:05.258 13:45:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:05.258 13:45:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:05.258 13:45:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:05.258 13:45:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:05.258 13:45:01 -- nvmf/common.sh@294 -- # net_devs=() 00:34:05.258 13:45:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:05.258 13:45:01 -- nvmf/common.sh@295 -- # e810=() 00:34:05.258 13:45:01 -- nvmf/common.sh@295 -- # local -ga e810 00:34:05.258 13:45:01 -- nvmf/common.sh@296 -- # x722=() 00:34:05.258 13:45:01 -- nvmf/common.sh@296 -- # local -ga x722 00:34:05.258 13:45:01 -- nvmf/common.sh@297 -- # mlx=() 00:34:05.258 13:45:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:05.258 13:45:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:05.258 13:45:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:05.258 13:45:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:05.258 13:45:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:05.258 13:45:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:05.258 13:45:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:05.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:05.258 13:45:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:05.258 13:45:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:05.258 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:05.258 13:45:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:05.258 13:45:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.258 13:45:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.258 13:45:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:05.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:05.258 13:45:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:05.258 13:45:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:05.258 13:45:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:05.258 13:45:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:05.258 13:45:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:05.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:05.258 13:45:01 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:05.258 13:45:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:05.258 13:45:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:05.258 13:45:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:05.258 13:45:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:05.258 13:45:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:05.258 13:45:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:05.258 13:45:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:05.258 13:45:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:05.258 13:45:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:05.258 13:45:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:05.258 13:45:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:05.258 13:45:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:05.258 13:45:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:05.258 13:45:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:05.258 13:45:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:05.258 13:45:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:05.258 13:45:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:05.259 13:45:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:05.259 13:45:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:05.259 13:45:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:05.259 13:45:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:05.259 13:45:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:05.259 13:45:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:05.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:05.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:34:05.259 00:34:05.259 --- 10.0.0.2 ping statistics --- 00:34:05.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.259 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:34:05.259 13:45:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:05.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:05.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:34:05.259 00:34:05.259 --- 10.0.0.1 ping statistics --- 00:34:05.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:05.259 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:34:05.259 13:45:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:05.259 13:45:01 -- nvmf/common.sh@410 -- # return 0 00:34:05.259 13:45:01 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:05.259 13:45:01 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:07.806 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:07.806 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:07.806 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:08.067 13:45:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.067 13:45:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:08.067 13:45:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:08.067 13:45:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.067 13:45:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:08.067 13:45:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:08.067 13:45:05 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:08.067 13:45:05 -- target/dif.sh@137 -- # nvmfappstart 00:34:08.067 13:45:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:08.067 13:45:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:08.067 13:45:05 -- common/autotest_common.sh@10 -- # set +x 00:34:08.067 13:45:05 -- nvmf/common.sh@469 -- # nvmfpid=1205220 00:34:08.067 13:45:05 -- nvmf/common.sh@470 -- # waitforlisten 1205220 00:34:08.067 13:45:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:08.067 13:45:05 -- common/autotest_common.sh@819 -- # '[' -z 1205220 ']' 00:34:08.067 13:45:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.067 13:45:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:08.067 13:45:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
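Before nvmf_tgt is launched here, the `nvmf_tcp_init` steps earlier in the trace built a loopback test topology: one NIC port is moved into a network namespace for the target, the other stays in the default namespace for the initiator, each gets an address in 10.0.0.0/24, TCP port 4420 is opened, and both directions are ping-tested. A condensed, hand-written equivalent of those commands is below (interface names are the cvl_* names from this run, the two ports are assumed to reach each other on the wire, and everything must run as root):

```bash
#!/usr/bin/env bash
# Two-namespace NVMe/TCP test topology, mirroring the ip/iptables calls above.
set -e
TGT_IF=cvl_0_0            # moved into the target namespace
INI_IF=cvl_0_1            # stays in the default (initiator) namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) arriving from the initiator side.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```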
00:34:08.067 13:45:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:08.067 13:45:05 -- common/autotest_common.sh@10 -- # set +x 00:34:08.067 [2024-07-26 13:45:05.380704] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:08.067 [2024-07-26 13:45:05.380775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.067 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.067 [2024-07-26 13:45:05.455090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.067 [2024-07-26 13:45:05.487892] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:08.067 [2024-07-26 13:45:05.488018] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.067 [2024-07-26 13:45:05.488027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.067 [2024-07-26 13:45:05.488034] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.067 [2024-07-26 13:45:05.488052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.010 13:45:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:09.010 13:45:06 -- common/autotest_common.sh@852 -- # return 0 00:34:09.010 13:45:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:09.010 13:45:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:09.010 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.010 13:45:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.010 13:45:06 -- target/dif.sh@139 -- # create_transport 00:34:09.010 13:45:06 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:09.010 13:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.010 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.010 [2024-07-26 13:45:06.177036] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.010 13:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.010 13:45:06 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:09.010 13:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:09.010 13:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:09.010 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.010 ************************************ 00:34:09.010 START TEST fio_dif_1_default 00:34:09.010 ************************************ 00:34:09.010 13:45:06 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:09.010 13:45:06 -- target/dif.sh@86 -- # create_subsystems 0 00:34:09.010 13:45:06 -- target/dif.sh@28 -- # local sub 00:34:09.010 13:45:06 -- target/dif.sh@30 -- # for sub in "$@" 00:34:09.010 13:45:06 -- target/dif.sh@31 -- # create_subsystem 0 00:34:09.010 13:45:06 -- target/dif.sh@18 -- # local sub_id=0 00:34:09.010 13:45:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:09.010 13:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.010 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.010 bdev_null0 00:34:09.010 13:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.010 13:45:06 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:09.010 13:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.010 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.010 13:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.010 13:45:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:09.011 13:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.011 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.011 13:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.011 13:45:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.011 13:45:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:09.011 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:34:09.011 [2024-07-26 13:45:06.233326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.011 13:45:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:09.011 13:45:06 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:09.011 13:45:06 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:09.011 13:45:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:09.011 13:45:06 -- nvmf/common.sh@520 -- # config=() 00:34:09.011 13:45:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.011 13:45:06 -- nvmf/common.sh@520 -- # local subsystem config 00:34:09.011 13:45:06 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.011 13:45:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:09.011 13:45:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:09.011 { 00:34:09.011 "params": { 00:34:09.011 "name": "Nvme$subsystem", 00:34:09.011 "trtype": "$TEST_TRANSPORT", 00:34:09.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.011 "adrfam": "ipv4", 00:34:09.011 "trsvcid": "$NVMF_PORT", 00:34:09.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.011 "hdgst": ${hdgst:-false}, 00:34:09.011 "ddgst": ${ddgst:-false} 00:34:09.011 }, 00:34:09.011 "method": "bdev_nvme_attach_controller" 00:34:09.011 } 00:34:09.011 EOF 00:34:09.011 )") 00:34:09.011 13:45:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:09.011 13:45:06 -- target/dif.sh@82 -- # gen_fio_conf 00:34:09.011 13:45:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.011 13:45:06 -- target/dif.sh@54 -- # local file 00:34:09.011 13:45:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:09.011 13:45:06 -- target/dif.sh@56 -- # cat 00:34:09.011 13:45:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.011 13:45:06 -- common/autotest_common.sh@1320 -- # shift 00:34:09.011 13:45:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:09.011 13:45:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.011 13:45:06 -- nvmf/common.sh@542 -- # cat 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.011 13:45:06 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:34:09.011 13:45:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:09.011 13:45:06 -- target/dif.sh@72 -- # (( file <= files )) 00:34:09.011 13:45:06 -- nvmf/common.sh@544 -- # jq . 00:34:09.011 13:45:06 -- nvmf/common.sh@545 -- # IFS=, 00:34:09.011 13:45:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:09.011 "params": { 00:34:09.011 "name": "Nvme0", 00:34:09.011 "trtype": "tcp", 00:34:09.011 "traddr": "10.0.0.2", 00:34:09.011 "adrfam": "ipv4", 00:34:09.011 "trsvcid": "4420", 00:34:09.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:09.011 "hdgst": false, 00:34:09.011 "ddgst": false 00:34:09.011 }, 00:34:09.011 "method": "bdev_nvme_attach_controller" 00:34:09.011 }' 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:09.011 13:45:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:09.011 13:45:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:09.011 13:45:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:09.011 13:45:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:09.011 13:45:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:09.011 13:45:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.272 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:09.272 fio-3.35 00:34:09.272 Starting 1 thread 00:34:09.272 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.532 [2024-07-26 13:45:06.960426] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
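What the `fio_bdev` wrapper above does is preload SPDK's fio bdev plugin and hand fio two generated files over anonymous descriptors: a JSON bdev config carrying the `bdev_nvme_attach_controller` parameters printed in the trace, and the job file built by `gen_fio_conf`. A hand-rolled equivalent is sketched below using ordinary temp files; the workspace path is this run's, while the `/tmp` filenames, the job options, and the `Nvme0n1` bdev name (SPDK's usual NvmeXnY naming for namespace 1) are illustrative approximations rather than the suite's exact generated files.

```bash
#!/usr/bin/env bash
# Run fio through SPDK's bdev plugin against the NVMe/TCP subsystem at 10.0.0.2:4420.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif_default.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev fio /tmp/dif_default.fio
```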
00:34:09.532 [2024-07-26 13:45:06.960475] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:21.766 00:34:21.766 filename0: (groupid=0, jobs=1): err= 0: pid=1205736: Fri Jul 26 13:45:17 2024 00:34:21.766 read: IOPS=181, BW=726KiB/s (743kB/s)(7280KiB/10029msec) 00:34:21.766 slat (nsec): min=5375, max=55869, avg=6163.69, stdev=1823.54 00:34:21.766 clat (usec): min=1336, max=43813, avg=22024.61, stdev=20292.38 00:34:21.766 lat (usec): min=1344, max=43849, avg=22030.77, stdev=20292.37 00:34:21.766 clat percentiles (usec): 00:34:21.766 | 1.00th=[ 1450], 5.00th=[ 1500], 10.00th=[ 1647], 20.00th=[ 1696], 00:34:21.766 | 30.00th=[ 1729], 40.00th=[ 1762], 50.00th=[41681], 60.00th=[42206], 00:34:21.766 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:21.766 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:34:21.766 | 99.99th=[43779] 00:34:21.766 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=726.40, stdev=31.32, samples=20 00:34:21.766 iops : min= 176, max= 192, avg=181.60, stdev= 7.83, samples=20 00:34:21.766 lat (msec) : 2=49.45%, 4=0.44%, 50=50.11% 00:34:21.766 cpu : usr=95.81%, sys=3.98%, ctx=15, majf=0, minf=311 00:34:21.766 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.766 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.766 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:21.766 00:34:21.766 Run status group 0 (all jobs): 00:34:21.766 READ: bw=726KiB/s (743kB/s), 726KiB/s-726KiB/s (743kB/s-743kB/s), io=7280KiB (7455kB), run=10029-10029msec 00:34:21.766 13:45:17 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:21.766 13:45:17 -- target/dif.sh@43 -- # local sub 00:34:21.766 13:45:17 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.766 13:45:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:21.766 13:45:17 -- target/dif.sh@36 -- # local sub_id=0 00:34:21.766 13:45:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 00:34:21.766 real 0m11.071s 00:34:21.766 user 0m22.386s 00:34:21.766 sys 0m0.733s 00:34:21.766 13:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 ************************************ 00:34:21.766 END TEST fio_dif_1_default 00:34:21.766 ************************************ 00:34:21.766 13:45:17 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:21.766 13:45:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:21.766 13:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 ************************************ 00:34:21.766 START TEST fio_dif_1_multi_subsystems 
00:34:21.766 ************************************ 00:34:21.766 13:45:17 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:21.766 13:45:17 -- target/dif.sh@92 -- # local files=1 00:34:21.766 13:45:17 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:21.766 13:45:17 -- target/dif.sh@28 -- # local sub 00:34:21.766 13:45:17 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.766 13:45:17 -- target/dif.sh@31 -- # create_subsystem 0 00:34:21.766 13:45:17 -- target/dif.sh@18 -- # local sub_id=0 00:34:21.766 13:45:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 bdev_null0 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 [2024-07-26 13:45:17.352464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.766 13:45:17 -- target/dif.sh@31 -- # create_subsystem 1 00:34:21.766 13:45:17 -- target/dif.sh@18 -- # local sub_id=1 00:34:21.766 13:45:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 bdev_null1 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.766 13:45:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.766 13:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:21.766 13:45:17 -- common/autotest_common.sh@10 -- # 
set +x 00:34:21.766 13:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:21.767 13:45:17 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:21.767 13:45:17 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:21.767 13:45:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:21.767 13:45:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.767 13:45:17 -- nvmf/common.sh@520 -- # config=() 00:34:21.767 13:45:17 -- nvmf/common.sh@520 -- # local subsystem config 00:34:21.767 13:45:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.767 13:45:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:21.767 13:45:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:21.767 13:45:17 -- target/dif.sh@82 -- # gen_fio_conf 00:34:21.767 13:45:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:21.767 { 00:34:21.767 "params": { 00:34:21.767 "name": "Nvme$subsystem", 00:34:21.767 "trtype": "$TEST_TRANSPORT", 00:34:21.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.767 "adrfam": "ipv4", 00:34:21.767 "trsvcid": "$NVMF_PORT", 00:34:21.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.767 "hdgst": ${hdgst:-false}, 00:34:21.767 "ddgst": ${ddgst:-false} 00:34:21.767 }, 00:34:21.767 "method": "bdev_nvme_attach_controller" 00:34:21.767 } 00:34:21.767 EOF 00:34:21.767 )") 00:34:21.767 13:45:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:21.767 13:45:17 -- target/dif.sh@54 -- # local file 00:34:21.767 13:45:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:21.767 13:45:17 -- target/dif.sh@56 -- # cat 00:34:21.767 13:45:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.767 13:45:17 -- common/autotest_common.sh@1320 -- # shift 00:34:21.767 13:45:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:21.767 13:45:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.767 13:45:17 -- nvmf/common.sh@542 -- # cat 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.767 13:45:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:21.767 13:45:17 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:21.767 13:45:17 -- target/dif.sh@73 -- # cat 00:34:21.767 13:45:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:21.767 13:45:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:21.767 { 00:34:21.767 "params": { 00:34:21.767 "name": "Nvme$subsystem", 00:34:21.767 "trtype": "$TEST_TRANSPORT", 00:34:21.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.767 "adrfam": "ipv4", 00:34:21.767 "trsvcid": "$NVMF_PORT", 00:34:21.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.767 "hdgst": ${hdgst:-false}, 00:34:21.767 "ddgst": ${ddgst:-false} 00:34:21.767 }, 00:34:21.767 "method": "bdev_nvme_attach_controller" 00:34:21.767 } 00:34:21.767 EOF 00:34:21.767 )") 00:34:21.767 13:45:17 -- target/dif.sh@72 -- # (( file++ )) 
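The two subsystems being driven here were created a few steps back with `rpc_cmd`, which in these tests is a thin wrapper around SPDK's `scripts/rpc.py` talking to the target inside the namespace. Reissued by hand, the target-side setup corresponding to those calls looks roughly like the following; the flags, sizes, and NQNs are copied from the trace, and the workspace path is this run's.

```bash
#!/usr/bin/env bash
# Target-side setup equivalent to the rpc_cmd calls in the trace: TCP transport
# with DIF insert/strip, two DIF type-1 null bdevs, and one subsystem + listener each.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="ip netns exec cvl_0_0_ns_spdk $SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

for i in 0 1; do
  # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
  $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
       --serial-number 53313233-$i --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
       -t tcp -a 10.0.0.2 -s 4420
done
```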
00:34:21.767 13:45:17 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.767 13:45:17 -- nvmf/common.sh@542 -- # cat 00:34:21.767 13:45:17 -- nvmf/common.sh@544 -- # jq . 00:34:21.767 13:45:17 -- nvmf/common.sh@545 -- # IFS=, 00:34:21.767 13:45:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:21.767 "params": { 00:34:21.767 "name": "Nvme0", 00:34:21.767 "trtype": "tcp", 00:34:21.767 "traddr": "10.0.0.2", 00:34:21.767 "adrfam": "ipv4", 00:34:21.767 "trsvcid": "4420", 00:34:21.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.767 "hdgst": false, 00:34:21.767 "ddgst": false 00:34:21.767 }, 00:34:21.767 "method": "bdev_nvme_attach_controller" 00:34:21.767 },{ 00:34:21.767 "params": { 00:34:21.767 "name": "Nvme1", 00:34:21.767 "trtype": "tcp", 00:34:21.767 "traddr": "10.0.0.2", 00:34:21.767 "adrfam": "ipv4", 00:34:21.767 "trsvcid": "4420", 00:34:21.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:21.767 "hdgst": false, 00:34:21.767 "ddgst": false 00:34:21.767 }, 00:34:21.767 "method": "bdev_nvme_attach_controller" 00:34:21.767 }' 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:21.767 13:45:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:21.767 13:45:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:21.767 13:45:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:21.767 13:45:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:21.767 13:45:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:21.767 13:45:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.767 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:21.767 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:21.767 fio-3.35 00:34:21.767 Starting 2 threads 00:34:21.767 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.767 [2024-07-26 13:45:18.486903] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:21.767 [2024-07-26 13:45:18.486950] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:31.851 00:34:31.851 filename0: (groupid=0, jobs=1): err= 0: pid=1208736: Fri Jul 26 13:45:28 2024 00:34:31.851 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10006msec) 00:34:31.851 slat (nsec): min=5380, max=25901, avg=6276.53, stdev=1480.85 00:34:31.851 clat (usec): min=41892, max=43715, avg=42023.97, stdev=207.87 00:34:31.851 lat (usec): min=41897, max=43741, avg=42030.25, stdev=208.42 00:34:31.851 clat percentiles (usec): 00:34:31.851 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:34:31.851 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:31.851 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:31.851 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:34:31.851 | 99.99th=[43779] 00:34:31.851 bw ( KiB/s): min= 352, max= 384, per=49.79%, avg=379.20, stdev=11.72, samples=20 00:34:31.851 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:34:31.851 lat (msec) : 50=100.00% 00:34:31.851 cpu : usr=97.50%, sys=2.30%, ctx=7, majf=0, minf=112 00:34:31.851 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.851 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.851 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:31.851 filename1: (groupid=0, jobs=1): err= 0: pid=1208737: Fri Jul 26 13:45:28 2024 00:34:31.851 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:34:31.851 slat (nsec): min=5375, max=31383, avg=6302.15, stdev=1639.75 00:34:31.851 clat (usec): min=41862, max=43699, avg=42006.61, stdev=170.09 00:34:31.851 lat (usec): min=41870, max=43731, avg=42012.92, stdev=170.66 00:34:31.851 clat percentiles (usec): 00:34:31.851 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:34:31.851 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:31.851 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:31.851 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:34:31.851 | 99.99th=[43779] 00:34:31.851 bw ( KiB/s): min= 352, max= 384, per=49.92%, avg=380.63, stdev=10.09, samples=19 00:34:31.851 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:34:31.851 lat (msec) : 50=100.00% 00:34:31.851 cpu : usr=97.66%, sys=2.14%, ctx=9, majf=0, minf=182 00:34:31.851 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.852 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:31.852 00:34:31.852 Run status group 0 (all jobs): 00:34:31.852 READ: bw=761KiB/s (779kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7616KiB (7799kB), run=10002-10006msec 00:34:31.852 13:45:28 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:31.852 13:45:28 -- target/dif.sh@43 -- # local sub 00:34:31.852 13:45:28 -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.852 13:45:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.852 13:45:28 -- target/dif.sh@36 -- # local sub_id=0 
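Each DIF test tears its subsystems down before the next one re-creates the null bdevs with different DIF settings, which is what the `destroy_subsystems` calls above and below are doing. Driven by hand, the cleanup for the two-subsystem case is simply the reverse of the setup sketch (same assumed paths as before):

```bash
#!/usr/bin/env bash
# Cleanup matching destroy_subsystems 0 1: drop each subsystem, then its null bdev.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="ip netns exec cvl_0_0_ns_spdk $SPDK/scripts/rpc.py"

for i in 0 1; do
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  $RPC bdev_null_delete bdev_null$i
done
```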
00:34:31.852 13:45:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.852 13:45:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:31.852 13:45:28 -- target/dif.sh@36 -- # local sub_id=1 00:34:31.852 13:45:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 00:34:31.852 real 0m11.474s 00:34:31.852 user 0m35.512s 00:34:31.852 sys 0m0.769s 00:34:31.852 13:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 ************************************ 00:34:31.852 END TEST fio_dif_1_multi_subsystems 00:34:31.852 ************************************ 00:34:31.852 13:45:28 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:31.852 13:45:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:31.852 13:45:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 ************************************ 00:34:31.852 START TEST fio_dif_rand_params 00:34:31.852 ************************************ 00:34:31.852 13:45:28 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:31.852 13:45:28 -- target/dif.sh@100 -- # local NULL_DIF 00:34:31.852 13:45:28 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:31.852 13:45:28 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:31.852 13:45:28 -- target/dif.sh@103 -- # bs=128k 00:34:31.852 13:45:28 -- target/dif.sh@103 -- # numjobs=3 00:34:31.852 13:45:28 -- target/dif.sh@103 -- # iodepth=3 00:34:31.852 13:45:28 -- target/dif.sh@103 -- # runtime=5 00:34:31.852 13:45:28 -- target/dif.sh@105 -- # create_subsystems 0 00:34:31.852 13:45:28 -- target/dif.sh@28 -- # local sub 00:34:31.852 13:45:28 -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.852 13:45:28 -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.852 13:45:28 -- target/dif.sh@18 -- # local sub_id=0 00:34:31.852 13:45:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 bdev_null0 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.852 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:31.852 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:34:31.852 [2024-07-26 13:45:28.870354] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.852 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:31.852 13:45:28 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:31.852 13:45:28 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:31.852 13:45:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:31.852 13:45:28 -- nvmf/common.sh@520 -- # config=() 00:34:31.852 13:45:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.852 13:45:28 -- nvmf/common.sh@520 -- # local subsystem config 00:34:31.852 13:45:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.852 13:45:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:31.852 13:45:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:31.852 { 00:34:31.852 "params": { 00:34:31.852 "name": "Nvme$subsystem", 00:34:31.852 "trtype": "$TEST_TRANSPORT", 00:34:31.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.852 "adrfam": "ipv4", 00:34:31.852 "trsvcid": "$NVMF_PORT", 00:34:31.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.852 "hdgst": ${hdgst:-false}, 00:34:31.852 "ddgst": ${ddgst:-false} 00:34:31.852 }, 00:34:31.852 "method": "bdev_nvme_attach_controller" 00:34:31.852 } 00:34:31.852 EOF 00:34:31.852 )") 00:34:31.852 13:45:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:31.852 13:45:28 -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.852 13:45:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.852 13:45:28 -- target/dif.sh@54 -- # local file 00:34:31.852 13:45:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:31.852 13:45:28 -- target/dif.sh@56 -- # cat 00:34:31.852 13:45:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.852 13:45:28 -- common/autotest_common.sh@1320 -- # shift 00:34:31.852 13:45:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:31.852 13:45:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.852 13:45:28 -- nvmf/common.sh@542 -- # cat 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.852 13:45:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.852 13:45:28 
-- common/autotest_common.sh@1324 -- # grep libasan 00:34:31.852 13:45:28 -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.852 13:45:28 -- nvmf/common.sh@544 -- # jq . 00:34:31.852 13:45:28 -- nvmf/common.sh@545 -- # IFS=, 00:34:31.852 13:45:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:31.852 "params": { 00:34:31.852 "name": "Nvme0", 00:34:31.852 "trtype": "tcp", 00:34:31.852 "traddr": "10.0.0.2", 00:34:31.852 "adrfam": "ipv4", 00:34:31.852 "trsvcid": "4420", 00:34:31.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.852 "hdgst": false, 00:34:31.852 "ddgst": false 00:34:31.852 }, 00:34:31.852 "method": "bdev_nvme_attach_controller" 00:34:31.852 }' 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.852 13:45:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.852 13:45:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:31.852 13:45:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:31.852 13:45:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:31.852 13:45:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.852 13:45:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.852 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:31.852 ... 00:34:31.852 fio-3.35 00:34:31.852 Starting 3 threads 00:34:32.127 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.387 [2024-07-26 13:45:29.697627] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
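This `fio_dif_rand_params` pass switches the workload to 128 KiB random reads with three jobs at queue depth 3 for 5 seconds against a DIF type-3 null bdev. A hand-written job file that approximates the generated one is below, reusing the JSON config approach from the earlier sketch; the `/tmp` filenames and the `Nvme0n1` bdev name are again illustrative.

```bash
#!/usr/bin/env bash
# Approximation of the bs=128k / numjobs=3 / iodepth=3 / runtime=5 workload above,
# reusing /tmp/bdev.json from the earlier sketch.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev fio /tmp/rand_params.fio
```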
00:34:32.387 [2024-07-26 13:45:29.697679] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:37.674 00:34:37.674 filename0: (groupid=0, jobs=1): err= 0: pid=1210954: Fri Jul 26 13:45:34 2024 00:34:37.674 read: IOPS=107, BW=13.5MiB/s (14.1MB/s)(67.5MiB/5005msec) 00:34:37.674 slat (nsec): min=5358, max=54829, avg=7353.62, stdev=2806.50 00:34:37.674 clat (usec): min=7973, max=96649, avg=27790.52, stdev=21216.21 00:34:37.674 lat (usec): min=7979, max=96654, avg=27797.88, stdev=21216.43 00:34:37.674 clat percentiles (usec): 00:34:37.674 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10945], 00:34:37.674 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13304], 60.00th=[14877], 00:34:37.674 | 70.00th=[52167], 80.00th=[53740], 90.00th=[55313], 95.00th=[56361], 00:34:37.674 | 99.00th=[59507], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:34:37.674 | 99.99th=[96994] 00:34:37.674 bw ( KiB/s): min=11520, max=16896, per=34.75%, avg=13747.20, stdev=1596.67, samples=10 00:34:37.674 iops : min= 90, max= 132, avg=107.40, stdev=12.47, samples=10 00:34:37.674 lat (msec) : 10=11.48%, 20=51.48%, 50=1.11%, 100=35.93% 00:34:37.674 cpu : usr=96.62%, sys=3.08%, ctx=6, majf=0, minf=126 00:34:37.674 IO depths : 1=8.7%, 2=91.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 issued rwts: total=540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:37.674 filename0: (groupid=0, jobs=1): err= 0: pid=1210955: Fri Jul 26 13:45:34 2024 00:34:37.674 read: IOPS=104, BW=13.0MiB/s (13.7MB/s)(65.6MiB/5038msec) 00:34:37.674 slat (nsec): min=5362, max=36402, avg=7352.72, stdev=1876.38 00:34:37.674 clat (usec): min=8001, max=66337, avg=28773.49, stdev=20965.87 00:34:37.674 lat (usec): min=8010, max=66373, avg=28780.84, stdev=20965.95 00:34:37.674 clat percentiles (usec): 00:34:37.674 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10945], 00:34:37.674 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13698], 60.00th=[16188], 00:34:37.674 | 70.00th=[52691], 80.00th=[54264], 90.00th=[55837], 95.00th=[56361], 00:34:37.674 | 99.00th=[57934], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:34:37.674 | 99.99th=[66323] 00:34:37.674 bw ( KiB/s): min= 9984, max=19200, per=33.79%, avg=13365.30, stdev=2783.40, samples=10 00:34:37.674 iops : min= 78, max= 150, avg=104.40, stdev=21.76, samples=10 00:34:37.674 lat (msec) : 10=13.33%, 20=46.67%, 50=0.95%, 100=39.05% 00:34:37.674 cpu : usr=96.78%, sys=2.92%, ctx=9, majf=0, minf=151 00:34:37.674 IO depths : 1=9.9%, 2=90.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 issued rwts: total=525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:37.674 filename0: (groupid=0, jobs=1): err= 0: pid=1210956: Fri Jul 26 13:45:34 2024 00:34:37.674 read: IOPS=97, BW=12.2MiB/s (12.8MB/s)(61.5MiB/5036msec) 00:34:37.674 slat (nsec): min=5391, max=50346, avg=8522.67, stdev=2860.11 00:34:37.674 clat (usec): min=7208, max=99234, avg=30684.45, stdev=22682.84 00:34:37.674 lat (usec): min=7216, max=99243, avg=30692.98, stdev=22682.83 00:34:37.674 clat percentiles 
(usec): 00:34:37.674 | 1.00th=[ 7308], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10814], 00:34:37.674 | 30.00th=[11994], 40.00th=[12780], 50.00th=[14222], 60.00th=[52167], 00:34:37.674 | 70.00th=[53216], 80.00th=[54789], 90.00th=[55837], 95.00th=[56886], 00:34:37.674 | 99.00th=[95945], 99.50th=[98042], 99.90th=[99091], 99.95th=[99091], 00:34:37.674 | 99.99th=[99091] 00:34:37.674 bw ( KiB/s): min= 9216, max=16896, per=31.64%, avg=12518.40, stdev=2636.92, samples=10 00:34:37.674 iops : min= 72, max= 132, avg=97.80, stdev=20.60, samples=10 00:34:37.674 lat (msec) : 10=14.84%, 20=42.28%, 100=42.89% 00:34:37.674 cpu : usr=97.02%, sys=2.68%, ctx=19, majf=0, minf=80 00:34:37.674 IO depths : 1=7.9%, 2=92.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.674 issued rwts: total=492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:37.674 00:34:37.674 Run status group 0 (all jobs): 00:34:37.674 READ: bw=38.6MiB/s (40.5MB/s), 12.2MiB/s-13.5MiB/s (12.8MB/s-14.1MB/s), io=195MiB (204MB), run=5005-5038msec 00:34:37.674 13:45:34 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:37.674 13:45:34 -- target/dif.sh@43 -- # local sub 00:34:37.674 13:45:34 -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.674 13:45:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:37.674 13:45:34 -- target/dif.sh@36 -- # local sub_id=0 00:34:37.674 13:45:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:37.674 13:45:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 13:45:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.674 13:45:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:37.674 13:45:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # bs=4k 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # numjobs=8 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # iodepth=16 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # runtime= 00:34:37.674 13:45:35 -- target/dif.sh@109 -- # files=2 00:34:37.674 13:45:35 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:37.674 13:45:35 -- target/dif.sh@28 -- # local sub 00:34:37.674 13:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.674 13:45:35 -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.674 13:45:35 -- target/dif.sh@18 -- # local sub_id=0 00:34:37.674 13:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:37.674 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 bdev_null0 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.674 13:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.674 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:34:37.674 13:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.674 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.674 13:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.674 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 [2024-07-26 13:45:35.048102] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.674 13:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.674 13:45:35 -- target/dif.sh@31 -- # create_subsystem 1 00:34:37.674 13:45:35 -- target/dif.sh@18 -- # local sub_id=1 00:34:37.674 13:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:37.674 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.674 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.674 bdev_null1 00:34:37.674 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.675 13:45:35 -- target/dif.sh@31 -- # create_subsystem 2 00:34:37.675 13:45:35 -- target/dif.sh@18 -- # local sub_id=2 00:34:37.675 13:45:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.675 bdev_null2 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- 
common/autotest_common.sh@10 -- # set +x 00:34:37.675 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.675 13:45:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:37.675 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:37.675 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:34:37.936 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:37.936 13:45:35 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:37.936 13:45:35 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:37.936 13:45:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:37.936 13:45:35 -- nvmf/common.sh@520 -- # config=() 00:34:37.936 13:45:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.936 13:45:35 -- nvmf/common.sh@520 -- # local subsystem config 00:34:37.936 13:45:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.936 13:45:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:37.936 { 00:34:37.936 "params": { 00:34:37.936 "name": "Nvme$subsystem", 00:34:37.936 "trtype": "$TEST_TRANSPORT", 00:34:37.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.936 "adrfam": "ipv4", 00:34:37.936 "trsvcid": "$NVMF_PORT", 00:34:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.936 "hdgst": ${hdgst:-false}, 00:34:37.936 "ddgst": ${ddgst:-false} 00:34:37.936 }, 00:34:37.936 "method": "bdev_nvme_attach_controller" 00:34:37.936 } 00:34:37.936 EOF 00:34:37.936 )") 00:34:37.936 13:45:35 -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.936 13:45:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:37.936 13:45:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.936 13:45:35 -- target/dif.sh@54 -- # local file 00:34:37.936 13:45:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:37.936 13:45:35 -- target/dif.sh@56 -- # cat 00:34:37.936 13:45:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.936 13:45:35 -- common/autotest_common.sh@1320 -- # shift 00:34:37.936 13:45:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:37.936 13:45:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # cat 00:34:37.936 13:45:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.936 13:45:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.936 13:45:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:37.936 13:45:35 -- target/dif.sh@73 -- # cat 00:34:37.936 13:45:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:37.936 { 00:34:37.936 "params": { 00:34:37.936 "name": "Nvme$subsystem", 00:34:37.936 "trtype": "$TEST_TRANSPORT", 00:34:37.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.936 "adrfam": "ipv4", 00:34:37.936 "trsvcid": 
"$NVMF_PORT", 00:34:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.936 "hdgst": ${hdgst:-false}, 00:34:37.936 "ddgst": ${ddgst:-false} 00:34:37.936 }, 00:34:37.936 "method": "bdev_nvme_attach_controller" 00:34:37.936 } 00:34:37.936 EOF 00:34:37.936 )") 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file++ )) 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.936 13:45:35 -- target/dif.sh@73 -- # cat 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # cat 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file++ )) 00:34:37.936 13:45:35 -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.936 13:45:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:37.936 { 00:34:37.936 "params": { 00:34:37.936 "name": "Nvme$subsystem", 00:34:37.936 "trtype": "$TEST_TRANSPORT", 00:34:37.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.936 "adrfam": "ipv4", 00:34:37.936 "trsvcid": "$NVMF_PORT", 00:34:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.936 "hdgst": ${hdgst:-false}, 00:34:37.936 "ddgst": ${ddgst:-false} 00:34:37.936 }, 00:34:37.936 "method": "bdev_nvme_attach_controller" 00:34:37.936 } 00:34:37.936 EOF 00:34:37.936 )") 00:34:37.936 13:45:35 -- nvmf/common.sh@542 -- # cat 00:34:37.936 13:45:35 -- nvmf/common.sh@544 -- # jq . 00:34:37.936 13:45:35 -- nvmf/common.sh@545 -- # IFS=, 00:34:37.936 13:45:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:37.936 "params": { 00:34:37.936 "name": "Nvme0", 00:34:37.936 "trtype": "tcp", 00:34:37.936 "traddr": "10.0.0.2", 00:34:37.936 "adrfam": "ipv4", 00:34:37.936 "trsvcid": "4420", 00:34:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.936 "hdgst": false, 00:34:37.936 "ddgst": false 00:34:37.936 }, 00:34:37.936 "method": "bdev_nvme_attach_controller" 00:34:37.936 },{ 00:34:37.936 "params": { 00:34:37.936 "name": "Nvme1", 00:34:37.936 "trtype": "tcp", 00:34:37.936 "traddr": "10.0.0.2", 00:34:37.936 "adrfam": "ipv4", 00:34:37.936 "trsvcid": "4420", 00:34:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:37.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:37.936 "hdgst": false, 00:34:37.936 "ddgst": false 00:34:37.937 }, 00:34:37.937 "method": "bdev_nvme_attach_controller" 00:34:37.937 },{ 00:34:37.937 "params": { 00:34:37.937 "name": "Nvme2", 00:34:37.937 "trtype": "tcp", 00:34:37.937 "traddr": "10.0.0.2", 00:34:37.937 "adrfam": "ipv4", 00:34:37.937 "trsvcid": "4420", 00:34:37.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:37.937 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:37.937 "hdgst": false, 00:34:37.937 "ddgst": false 00:34:37.937 }, 00:34:37.937 "method": "bdev_nvme_attach_controller" 00:34:37.937 }' 00:34:37.937 13:45:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:37.937 13:45:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:37.937 13:45:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.937 13:45:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.937 13:45:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:37.937 13:45:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:37.937 13:45:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:37.937 13:45:35 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:37.937 13:45:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.937 13:45:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:38.197 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:38.197 ... 00:34:38.197 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:38.197 ... 00:34:38.197 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:38.197 ... 00:34:38.197 fio-3.35 00:34:38.197 Starting 24 threads 00:34:38.197 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.139 [2024-07-26 13:45:36.552839] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:39.139 [2024-07-26 13:45:36.552887] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:51.384 00:34:51.384 filename0: (groupid=0, jobs=1): err= 0: pid=1212477: Fri Jul 26 13:45:46 2024 00:34:51.384 read: IOPS=548, BW=2195KiB/s (2248kB/s)(21.5MiB/10025msec) 00:34:51.384 slat (nsec): min=5389, max=96595, avg=11095.45, stdev=8599.10 00:34:51.384 clat (usec): min=1793, max=58607, avg=29054.60, stdev=5152.37 00:34:51.384 lat (usec): min=1805, max=58613, avg=29065.70, stdev=5152.31 00:34:51.384 clat percentiles (usec): 00:34:51.384 | 1.00th=[ 3163], 5.00th=[20841], 10.00th=[27395], 20.00th=[28443], 00:34:51.384 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:34:51.384 | 70.00th=[30016], 80.00th=[30540], 90.00th=[31065], 95.00th=[35914], 00:34:51.384 | 99.00th=[42730], 99.50th=[43779], 99.90th=[55837], 99.95th=[58459], 00:34:51.384 | 99.99th=[58459] 00:34:51.384 bw ( KiB/s): min= 2024, max= 2976, per=4.39%, avg=2203.37, stdev=213.80, samples=19 00:34:51.384 iops : min= 506, max= 744, avg=550.84, stdev=53.45, samples=19 00:34:51.384 lat (msec) : 2=0.11%, 4=1.29%, 10=0.64%, 20=2.34%, 50=95.37% 00:34:51.384 lat (msec) : 100=0.25% 00:34:51.384 cpu : usr=99.20%, sys=0.46%, ctx=61, majf=0, minf=57 00:34:51.384 IO depths : 1=4.7%, 2=9.5%, 4=20.7%, 8=56.6%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:51.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.384 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.384 issued rwts: total=5502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.384 filename0: (groupid=0, jobs=1): err= 0: pid=1212478: Fri Jul 26 13:45:46 2024 00:34:51.384 read: IOPS=507, BW=2032KiB/s (2080kB/s)(19.9MiB/10020msec) 00:34:51.384 slat (usec): min=5, max=114, avg=14.31, stdev=12.46 00:34:51.384 clat (usec): min=9914, max=71543, avg=31421.00, stdev=6112.42 00:34:51.384 lat (usec): min=9922, max=71563, avg=31435.30, stdev=6112.38 00:34:51.384 clat percentiles (usec): 00:34:51.384 | 1.00th=[16909], 5.00th=[22938], 10.00th=[27395], 20.00th=[28443], 00:34:51.384 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30278], 00:34:51.384 | 70.00th=[30802], 80.00th=[35390], 90.00th=[40633], 95.00th=[43254], 00:34:51.384 | 99.00th=[51119], 99.50th=[53216], 99.90th=[62653], 99.95th=[71828], 00:34:51.384 | 99.99th=[71828] 00:34:51.384 bw ( KiB/s): min= 1763, max= 2152, 
per=4.04%, avg=2029.35, stdev=103.19, samples=20 00:34:51.384 iops : min= 440, max= 538, avg=507.30, stdev=25.90, samples=20 00:34:51.384 lat (msec) : 10=0.02%, 20=2.71%, 50=96.09%, 100=1.18% 00:34:51.384 cpu : usr=98.90%, sys=0.75%, ctx=43, majf=0, minf=64 00:34:51.384 IO depths : 1=0.3%, 2=0.5%, 4=6.2%, 8=78.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:34:51.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.384 complete : 0=0.0%, 4=89.9%, 8=7.2%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.384 issued rwts: total=5089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.384 filename0: (groupid=0, jobs=1): err= 0: pid=1212479: Fri Jul 26 13:45:46 2024 00:34:51.384 read: IOPS=518, BW=2074KiB/s (2124kB/s)(20.3MiB/10004msec) 00:34:51.384 slat (usec): min=5, max=160, avg=16.48, stdev=15.33 00:34:51.384 clat (usec): min=14809, max=59303, avg=30764.97, stdev=5295.35 00:34:51.384 lat (usec): min=14818, max=59312, avg=30781.45, stdev=5295.47 00:34:51.384 clat percentiles (usec): 00:34:51.385 | 1.00th=[16909], 5.00th=[23462], 10.00th=[26608], 20.00th=[28181], 00:34:51.385 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30278], 00:34:51.385 | 70.00th=[30802], 80.00th=[32900], 90.00th=[38011], 95.00th=[41681], 00:34:51.385 | 99.00th=[50070], 99.50th=[50594], 99.90th=[56361], 99.95th=[59507], 00:34:51.385 | 99.99th=[59507] 00:34:51.385 bw ( KiB/s): min= 1760, max= 2296, per=4.11%, avg=2063.16, stdev=131.45, samples=19 00:34:51.385 iops : min= 440, max= 574, avg=515.79, stdev=32.86, samples=19 00:34:51.385 lat (msec) : 20=2.39%, 50=96.84%, 100=0.77% 00:34:51.385 cpu : usr=94.23%, sys=2.73%, ctx=64, majf=0, minf=42 00:34:51.385 IO depths : 1=0.3%, 2=0.8%, 4=5.7%, 8=78.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=89.7%, 8=6.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=5188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename0: (groupid=0, jobs=1): err= 0: pid=1212480: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=494, BW=1978KiB/s (2026kB/s)(19.4MiB/10018msec) 00:34:51.385 slat (usec): min=5, max=140, avg=17.68, stdev=16.49 00:34:51.385 clat (usec): min=12784, max=64026, avg=32232.31, stdev=6607.16 00:34:51.385 lat (usec): min=12794, max=64042, avg=32249.99, stdev=6606.13 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[16909], 5.00th=[24249], 10.00th=[27919], 20.00th=[28705], 00:34:51.385 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30016], 60.00th=[30802], 00:34:51.385 | 70.00th=[32900], 80.00th=[36439], 90.00th=[41157], 95.00th=[45351], 00:34:51.385 | 99.00th=[55313], 99.50th=[60031], 99.90th=[61604], 99.95th=[63701], 00:34:51.385 | 99.99th=[64226] 00:34:51.385 bw ( KiB/s): min= 1792, max= 2128, per=3.94%, avg=1978.00, stdev=94.22, samples=20 00:34:51.385 iops : min= 448, max= 532, avg=494.50, stdev=23.56, samples=20 00:34:51.385 lat (msec) : 20=2.38%, 50=95.10%, 100=2.52% 00:34:51.385 cpu : usr=96.35%, sys=1.72%, ctx=128, majf=0, minf=43 00:34:51.385 IO depths : 1=0.4%, 2=0.9%, 4=7.0%, 8=77.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=90.0%, 8=6.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=4955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename0: (groupid=0, jobs=1): err= 0: pid=1212481: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=539, BW=2157KiB/s (2208kB/s)(21.1MiB/10001msec) 00:34:51.385 slat (usec): min=5, max=114, avg=23.65, stdev=17.36 00:34:51.385 clat (usec): min=19344, max=49123, avg=29467.76, stdev=1774.49 00:34:51.385 lat (usec): min=19353, max=49145, avg=29491.41, stdev=1775.38 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[26870], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:34:51.385 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:34:51.385 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:34:51.385 | 99.00th=[32375], 99.50th=[40633], 99.90th=[49021], 99.95th=[49021], 00:34:51.385 | 99.99th=[49021] 00:34:51.385 bw ( KiB/s): min= 1923, max= 2304, per=4.28%, avg=2149.21, stdev=81.83, samples=19 00:34:51.385 iops : min= 480, max= 576, avg=537.26, stdev=20.57, samples=19 00:34:51.385 lat (msec) : 20=0.30%, 50=99.70% 00:34:51.385 cpu : usr=99.12%, sys=0.53%, ctx=77, majf=0, minf=35 00:34:51.385 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename0: (groupid=0, jobs=1): err= 0: pid=1212482: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10016msec) 00:34:51.385 slat (usec): min=5, max=106, avg=13.82, stdev=13.30 00:34:51.385 clat (usec): min=9876, max=62039, avg=29886.63, stdev=4422.08 00:34:51.385 lat (usec): min=9893, max=62055, avg=29900.45, stdev=4422.97 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[17171], 5.00th=[25297], 10.00th=[27657], 20.00th=[28443], 00:34:51.385 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29754], 60.00th=[30016], 00:34:51.385 | 70.00th=[30278], 80.00th=[30540], 90.00th=[31589], 95.00th=[36439], 00:34:51.385 | 99.00th=[48497], 99.50th=[52691], 99.90th=[62129], 99.95th=[62129], 00:34:51.385 | 99.99th=[62129] 00:34:51.385 bw ( KiB/s): min= 1792, max= 2224, per=4.22%, avg=2122.95, stdev=101.62, samples=19 00:34:51.385 iops : min= 448, max= 556, avg=530.74, stdev=25.41, samples=19 00:34:51.385 lat (msec) : 10=0.11%, 20=2.06%, 50=97.16%, 100=0.67% 00:34:51.385 cpu : usr=99.32%, sys=0.38%, ctx=15, majf=0, minf=66 00:34:51.385 IO depths : 1=2.3%, 2=7.1%, 4=21.5%, 8=58.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename0: (groupid=0, jobs=1): err= 0: pid=1212483: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=507, BW=2029KiB/s (2077kB/s)(19.8MiB/10004msec) 00:34:51.385 slat (nsec): min=5380, max=99531, avg=12775.36, stdev=11804.78 00:34:51.385 clat (usec): min=4779, max=75633, avg=31482.90, stdev=6369.00 00:34:51.385 lat (usec): min=4785, max=75655, avg=31495.68, stdev=6369.17 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[13960], 5.00th=[24249], 10.00th=[27395], 20.00th=[28443], 00:34:51.385 | 
30.00th=[29230], 40.00th=[29754], 50.00th=[30016], 60.00th=[30540], 00:34:51.385 | 70.00th=[31589], 80.00th=[34866], 90.00th=[39060], 95.00th=[43254], 00:34:51.385 | 99.00th=[52691], 99.50th=[58983], 99.90th=[60556], 99.95th=[74974], 00:34:51.385 | 99.99th=[76022] 00:34:51.385 bw ( KiB/s): min= 1760, max= 2224, per=3.99%, avg=2004.21, stdev=108.95, samples=19 00:34:51.385 iops : min= 440, max= 556, avg=501.05, stdev=27.24, samples=19 00:34:51.385 lat (msec) : 10=0.63%, 20=1.56%, 50=95.59%, 100=2.23% 00:34:51.385 cpu : usr=98.85%, sys=0.71%, ctx=18, majf=0, minf=76 00:34:51.385 IO depths : 1=0.1%, 2=0.5%, 4=5.1%, 8=78.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=89.8%, 8=7.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=5074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename0: (groupid=0, jobs=1): err= 0: pid=1212484: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10005msec) 00:34:51.385 slat (usec): min=5, max=108, avg=16.73, stdev=15.10 00:34:51.385 clat (usec): min=6488, max=58266, avg=32282.25, stdev=6181.47 00:34:51.385 lat (usec): min=6494, max=58286, avg=32298.98, stdev=6179.72 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[17695], 5.00th=[25297], 10.00th=[27919], 20.00th=[28705], 00:34:51.385 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30278], 60.00th=[30802], 00:34:51.385 | 70.00th=[33817], 80.00th=[37487], 90.00th=[41157], 95.00th=[43779], 00:34:51.385 | 99.00th=[50070], 99.50th=[53216], 99.90th=[57934], 99.95th=[58459], 00:34:51.385 | 99.99th=[58459] 00:34:51.385 bw ( KiB/s): min= 1664, max= 2160, per=3.91%, avg=1964.21, stdev=107.97, samples=19 00:34:51.385 iops : min= 416, max= 540, avg=491.05, stdev=26.99, samples=19 00:34:51.385 lat (msec) : 10=0.47%, 20=1.76%, 50=96.76%, 100=1.01% 00:34:51.385 cpu : usr=95.13%, sys=2.23%, ctx=219, majf=0, minf=76 00:34:51.385 IO depths : 1=0.1%, 2=0.8%, 4=10.3%, 8=74.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=91.2%, 8=5.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename1: (groupid=0, jobs=1): err= 0: pid=1212485: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10016msec) 00:34:51.385 slat (usec): min=5, max=109, avg=14.57, stdev=13.64 00:34:51.385 clat (usec): min=9859, max=54088, avg=29937.06, stdev=4450.75 00:34:51.385 lat (usec): min=9874, max=54097, avg=29951.63, stdev=4451.91 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[17957], 5.00th=[22938], 10.00th=[26870], 20.00th=[28443], 00:34:51.385 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[30016], 00:34:51.385 | 70.00th=[30278], 80.00th=[30802], 90.00th=[34866], 95.00th=[37487], 00:34:51.385 | 99.00th=[46400], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:34:51.385 | 99.99th=[54264] 00:34:51.385 bw ( KiB/s): min= 1992, max= 2304, per=4.24%, avg=2130.40, stdev=81.01, samples=20 00:34:51.385 iops : min= 498, max= 576, avg=532.60, stdev=20.25, samples=20 00:34:51.385 lat (msec) : 10=0.08%, 20=2.36%, 50=97.22%, 100=0.34% 00:34:51.385 cpu : usr=98.78%, sys=0.80%, ctx=28, majf=0, minf=41 00:34:51.385 
IO depths : 1=1.4%, 2=3.2%, 4=11.2%, 8=71.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:51.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.385 issued rwts: total=5332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.385 filename1: (groupid=0, jobs=1): err= 0: pid=1212486: Fri Jul 26 13:45:46 2024 00:34:51.385 read: IOPS=512, BW=2050KiB/s (2100kB/s)(20.0MiB/10012msec) 00:34:51.385 slat (usec): min=5, max=116, avg=17.31, stdev=17.03 00:34:51.385 clat (usec): min=12415, max=58175, avg=31090.81, stdev=5124.44 00:34:51.385 lat (usec): min=12424, max=58195, avg=31108.12, stdev=5123.64 00:34:51.385 clat percentiles (usec): 00:34:51.385 | 1.00th=[19530], 5.00th=[25297], 10.00th=[27657], 20.00th=[28705], 00:34:51.385 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30016], 60.00th=[30278], 00:34:51.385 | 70.00th=[30802], 80.00th=[32375], 90.00th=[38011], 95.00th=[41681], 00:34:51.386 | 99.00th=[48497], 99.50th=[52691], 99.90th=[57410], 99.95th=[57934], 00:34:51.386 | 99.99th=[57934] 00:34:51.386 bw ( KiB/s): min= 1920, max= 2176, per=4.08%, avg=2048.00, stdev=82.32, samples=19 00:34:51.386 iops : min= 480, max= 544, avg=512.00, stdev=20.58, samples=19 00:34:51.386 lat (msec) : 20=1.25%, 50=97.95%, 100=0.80% 00:34:51.386 cpu : usr=94.98%, sys=2.33%, ctx=197, majf=0, minf=47 00:34:51.386 IO depths : 1=0.4%, 2=2.5%, 4=12.0%, 8=71.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=91.4%, 8=4.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=5132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212487: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=501, BW=2006KiB/s (2055kB/s)(19.6MiB/10016msec) 00:34:51.386 slat (usec): min=5, max=105, avg=11.80, stdev= 9.71 00:34:51.386 clat (usec): min=10579, max=59185, avg=31786.34, stdev=5981.01 00:34:51.386 lat (usec): min=10589, max=59207, avg=31798.14, stdev=5980.53 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[15139], 5.00th=[25035], 10.00th=[28181], 20.00th=[28967], 00:34:51.386 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30016], 60.00th=[30540], 00:34:51.386 | 70.00th=[31327], 80.00th=[35914], 90.00th=[40109], 95.00th=[42730], 00:34:51.386 | 99.00th=[51643], 99.50th=[55313], 99.90th=[58459], 99.95th=[58459], 00:34:51.386 | 99.99th=[58983] 00:34:51.386 bw ( KiB/s): min= 1792, max= 2176, per=4.00%, avg=2007.20, stdev=139.02, samples=20 00:34:51.386 iops : min= 448, max= 544, avg=501.80, stdev=34.76, samples=20 00:34:51.386 lat (msec) : 20=2.63%, 50=96.24%, 100=1.13% 00:34:51.386 cpu : usr=99.09%, sys=0.55%, ctx=54, majf=0, minf=65 00:34:51.386 IO depths : 1=2.2%, 2=5.2%, 4=15.4%, 8=66.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=91.8%, 8=3.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212488: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=497, BW=1991KiB/s (2039kB/s)(19.5MiB/10004msec) 00:34:51.386 slat (usec): min=5, max=115, 
avg=13.73, stdev=12.95 00:34:51.386 clat (usec): min=8283, max=58993, avg=32050.86, stdev=6521.29 00:34:51.386 lat (usec): min=8288, max=59043, avg=32064.59, stdev=6519.95 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[13566], 5.00th=[23987], 10.00th=[27919], 20.00th=[28967], 00:34:51.386 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30278], 60.00th=[30540], 00:34:51.386 | 70.00th=[32113], 80.00th=[36439], 90.00th=[40633], 95.00th=[44827], 00:34:51.386 | 99.00th=[53740], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:34:51.386 | 99.99th=[58983] 00:34:51.386 bw ( KiB/s): min= 1664, max= 2176, per=3.95%, avg=1986.53, stdev=143.36, samples=19 00:34:51.386 iops : min= 416, max= 544, avg=496.63, stdev=35.84, samples=19 00:34:51.386 lat (msec) : 10=0.54%, 20=2.01%, 50=95.72%, 100=1.73% 00:34:51.386 cpu : usr=98.90%, sys=0.70%, ctx=103, majf=0, minf=56 00:34:51.386 IO depths : 1=0.2%, 2=2.4%, 4=13.1%, 8=70.1%, 16=14.3%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=91.8%, 8=4.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=4980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212489: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=533, BW=2132KiB/s (2183kB/s)(20.9MiB/10018msec) 00:34:51.386 slat (usec): min=5, max=503, avg=16.97, stdev=17.21 00:34:51.386 clat (usec): min=6158, max=58423, avg=29877.12, stdev=3159.38 00:34:51.386 lat (usec): min=6167, max=58442, avg=29894.09, stdev=3159.38 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[23200], 5.00th=[27657], 10.00th=[28181], 20.00th=[28443], 00:34:51.386 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:34:51.386 | 70.00th=[30016], 80.00th=[30278], 90.00th=[31065], 95.00th=[33424], 00:34:51.386 | 99.00th=[44303], 99.50th=[47449], 99.90th=[52691], 99.95th=[58459], 00:34:51.386 | 99.99th=[58459] 00:34:51.386 bw ( KiB/s): min= 1920, max= 2294, per=4.24%, avg=2129.10, stdev=96.18, samples=20 00:34:51.386 iops : min= 480, max= 573, avg=532.25, stdev=24.00, samples=20 00:34:51.386 lat (msec) : 10=0.13%, 20=0.71%, 50=98.93%, 100=0.22% 00:34:51.386 cpu : usr=96.82%, sys=1.53%, ctx=103, majf=0, minf=42 00:34:51.386 IO depths : 1=4.9%, 2=10.7%, 4=24.2%, 8=52.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=5340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212490: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=545, BW=2180KiB/s (2233kB/s)(21.3MiB/10010msec) 00:34:51.386 slat (nsec): min=4633, max=86561, avg=11565.99, stdev=8790.85 00:34:51.386 clat (usec): min=8796, max=62322, avg=29252.05, stdev=2830.74 00:34:51.386 lat (usec): min=8802, max=62335, avg=29263.62, stdev=2831.20 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[16712], 5.00th=[27132], 10.00th=[27657], 20.00th=[28443], 00:34:51.386 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:34:51.386 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:34:51.386 | 99.00th=[31851], 99.50th=[35390], 99.90th=[62129], 99.95th=[62129], 00:34:51.386 | 99.99th=[62129] 
00:34:51.386 bw ( KiB/s): min= 1923, max= 2432, per=4.33%, avg=2176.15, stdev=109.51, samples=20 00:34:51.386 iops : min= 480, max= 608, avg=544.00, stdev=27.47, samples=20 00:34:51.386 lat (msec) : 10=0.11%, 20=1.81%, 50=97.78%, 100=0.29% 00:34:51.386 cpu : usr=96.21%, sys=1.74%, ctx=137, majf=0, minf=54 00:34:51.386 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=5456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212491: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=523, BW=2093KiB/s (2143kB/s)(20.4MiB/10002msec) 00:34:51.386 slat (usec): min=5, max=100, avg=15.20, stdev=12.43 00:34:51.386 clat (usec): min=9600, max=54930, avg=30471.63, stdev=4928.76 00:34:51.386 lat (usec): min=9606, max=54937, avg=30486.83, stdev=4928.38 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[17957], 5.00th=[23200], 10.00th=[27395], 20.00th=[28705], 00:34:51.386 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:34:51.386 | 70.00th=[30540], 80.00th=[31327], 90.00th=[37487], 95.00th=[41157], 00:34:51.386 | 99.00th=[47449], 99.50th=[49021], 99.90th=[53740], 99.95th=[54264], 00:34:51.386 | 99.99th=[54789] 00:34:51.386 bw ( KiB/s): min= 1920, max= 2224, per=4.15%, avg=2084.63, stdev=96.49, samples=19 00:34:51.386 iops : min= 480, max= 556, avg=521.16, stdev=24.12, samples=19 00:34:51.386 lat (msec) : 10=0.11%, 20=2.75%, 50=96.79%, 100=0.34% 00:34:51.386 cpu : usr=99.02%, sys=0.57%, ctx=18, majf=0, minf=41 00:34:51.386 IO depths : 1=2.5%, 2=5.3%, 4=15.3%, 8=66.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: total=5233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename1: (groupid=0, jobs=1): err= 0: pid=1212492: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.4MiB/10017msec) 00:34:51.386 slat (usec): min=5, max=104, avg=18.77, stdev=15.96 00:34:51.386 clat (usec): min=12504, max=55498, avg=30577.86, stdev=4869.50 00:34:51.386 lat (usec): min=12510, max=55506, avg=30596.63, stdev=4868.40 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[18220], 5.00th=[23462], 10.00th=[27657], 20.00th=[28443], 00:34:51.386 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:34:51.386 | 70.00th=[30540], 80.00th=[31327], 90.00th=[36963], 95.00th=[40109], 00:34:51.386 | 99.00th=[48497], 99.50th=[50070], 99.90th=[54789], 99.95th=[55313], 00:34:51.386 | 99.99th=[55313] 00:34:51.386 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2084.00, stdev=77.50, samples=20 00:34:51.386 iops : min= 480, max= 544, avg=521.00, stdev=19.37, samples=20 00:34:51.386 lat (msec) : 20=1.99%, 50=97.35%, 100=0.65% 00:34:51.386 cpu : usr=98.79%, sys=0.74%, ctx=26, majf=0, minf=38 00:34:51.386 IO depths : 1=1.6%, 2=3.5%, 4=12.8%, 8=70.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:34:51.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.386 issued rwts: 
total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.386 filename2: (groupid=0, jobs=1): err= 0: pid=1212493: Fri Jul 26 13:45:46 2024 00:34:51.386 read: IOPS=522, BW=2089KiB/s (2139kB/s)(20.4MiB/10005msec) 00:34:51.386 slat (usec): min=5, max=117, avg=14.73, stdev=13.33 00:34:51.386 clat (usec): min=9470, max=57283, avg=30550.36, stdev=5715.23 00:34:51.386 lat (usec): min=9480, max=57291, avg=30565.09, stdev=5716.33 00:34:51.386 clat percentiles (usec): 00:34:51.386 | 1.00th=[17171], 5.00th=[21103], 10.00th=[25822], 20.00th=[28181], 00:34:51.386 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:34:51.386 | 70.00th=[30540], 80.00th=[31851], 90.00th=[37487], 95.00th=[41157], 00:34:51.386 | 99.00th=[51643], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:34:51.386 | 99.99th=[57410] 00:34:51.386 bw ( KiB/s): min= 1840, max= 2256, per=4.13%, avg=2076.21, stdev=88.66, samples=19 00:34:51.386 iops : min= 460, max= 564, avg=519.05, stdev=22.16, samples=19 00:34:51.387 lat (msec) : 10=0.02%, 20=3.87%, 50=94.58%, 100=1.53% 00:34:51.387 cpu : usr=98.42%, sys=1.09%, ctx=58, majf=0, minf=40 00:34:51.387 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=75.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212494: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=534, BW=2139KiB/s (2191kB/s)(20.9MiB/10016msec) 00:34:51.387 slat (usec): min=4, max=121, avg=15.98, stdev=13.32 00:34:51.387 clat (usec): min=8230, max=56524, avg=29797.44, stdev=5087.76 00:34:51.387 lat (usec): min=8240, max=56540, avg=29813.42, stdev=5087.96 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[16188], 5.00th=[20579], 10.00th=[25560], 20.00th=[28181], 00:34:51.387 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29492], 60.00th=[30016], 00:34:51.387 | 70.00th=[30278], 80.00th=[30802], 90.00th=[34866], 95.00th=[38536], 00:34:51.387 | 99.00th=[47973], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:34:51.387 | 99.99th=[56361] 00:34:51.387 bw ( KiB/s): min= 1923, max= 2304, per=4.25%, avg=2136.55, stdev=97.51, samples=20 00:34:51.387 iops : min= 480, max= 576, avg=534.10, stdev=24.46, samples=20 00:34:51.387 lat (msec) : 10=0.15%, 20=3.79%, 50=95.31%, 100=0.75% 00:34:51.387 cpu : usr=98.89%, sys=0.73%, ctx=27, majf=0, minf=30 00:34:51.387 IO depths : 1=2.5%, 2=5.4%, 4=15.6%, 8=65.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=91.9%, 8=3.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212495: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=585, BW=2342KiB/s (2398kB/s)(22.9MiB/10004msec) 00:34:51.387 slat (nsec): min=5386, max=92675, avg=8849.24, stdev=7141.51 00:34:51.387 clat (usec): min=2373, max=46027, avg=27259.27, stdev=5451.92 00:34:51.387 lat (usec): min=2385, max=46037, avg=27268.12, stdev=5452.09 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[ 3458], 
5.00th=[17171], 10.00th=[19792], 20.00th=[24773], 00:34:51.387 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[29492], 00:34:51.387 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30540], 95.00th=[31065], 00:34:51.387 | 99.00th=[36439], 99.50th=[37487], 99.90th=[45876], 99.95th=[45876], 00:34:51.387 | 99.99th=[45876] 00:34:51.387 bw ( KiB/s): min= 2048, max= 3504, per=4.67%, avg=2344.84, stdev=405.99, samples=19 00:34:51.387 iops : min= 512, max= 876, avg=586.21, stdev=101.50, samples=19 00:34:51.387 lat (msec) : 4=1.33%, 10=0.92%, 20=8.55%, 50=89.19% 00:34:51.387 cpu : usr=99.06%, sys=0.65%, ctx=15, majf=0, minf=64 00:34:51.387 IO depths : 1=4.8%, 2=9.7%, 4=20.9%, 8=56.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212496: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=530, BW=2121KiB/s (2171kB/s)(20.7MiB/10005msec) 00:34:51.387 slat (nsec): min=5529, max=90163, avg=15618.80, stdev=12500.62 00:34:51.387 clat (usec): min=5127, max=57169, avg=30069.12, stdev=5056.98 00:34:51.387 lat (usec): min=5133, max=57177, avg=30084.74, stdev=5057.56 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[16188], 5.00th=[23200], 10.00th=[27657], 20.00th=[28443], 00:34:51.387 | 30.00th=[28967], 40.00th=[29230], 50.00th=[29492], 60.00th=[30016], 00:34:51.387 | 70.00th=[30278], 80.00th=[30802], 90.00th=[33817], 95.00th=[39060], 00:34:51.387 | 99.00th=[49546], 99.50th=[50070], 99.90th=[56886], 99.95th=[57410], 00:34:51.387 | 99.99th=[57410] 00:34:51.387 bw ( KiB/s): min= 1904, max= 2208, per=4.18%, avg=2098.53, stdev=76.18, samples=19 00:34:51.387 iops : min= 476, max= 552, avg=524.63, stdev=19.04, samples=19 00:34:51.387 lat (msec) : 10=0.68%, 20=2.56%, 50=96.00%, 100=0.75% 00:34:51.387 cpu : usr=99.22%, sys=0.42%, ctx=73, majf=0, minf=54 00:34:51.387 IO depths : 1=0.5%, 2=4.5%, 4=19.5%, 8=62.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=93.2%, 8=1.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212497: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=512, BW=2049KiB/s (2098kB/s)(20.0MiB/10006msec) 00:34:51.387 slat (nsec): min=5545, max=81180, avg=12930.18, stdev=10629.19 00:34:51.387 clat (usec): min=9730, max=54427, avg=31142.66, stdev=5799.34 00:34:51.387 lat (usec): min=9736, max=54434, avg=31155.59, stdev=5798.97 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[16188], 5.00th=[23462], 10.00th=[27395], 20.00th=[28705], 00:34:51.387 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30016], 60.00th=[30540], 00:34:51.387 | 70.00th=[30802], 80.00th=[33817], 90.00th=[39584], 95.00th=[43254], 00:34:51.387 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:34:51.387 | 99.99th=[54264] 00:34:51.387 bw ( KiB/s): min= 1920, max= 2176, per=4.09%, avg=2054.74, stdev=79.52, samples=19 00:34:51.387 iops : min= 480, max= 544, avg=513.68, stdev=19.88, samples=19 00:34:51.387 lat (msec) : 10=0.12%, 20=3.02%, 50=95.61%, 
100=1.25% 00:34:51.387 cpu : usr=98.94%, sys=0.69%, ctx=18, majf=0, minf=52 00:34:51.387 IO depths : 1=1.8%, 2=4.0%, 4=13.4%, 8=69.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=91.5%, 8=3.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212498: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=507, BW=2030KiB/s (2079kB/s)(19.9MiB/10016msec) 00:34:51.387 slat (usec): min=5, max=121, avg=15.29, stdev=13.17 00:34:51.387 clat (usec): min=10996, max=57566, avg=31386.94, stdev=5787.16 00:34:51.387 lat (usec): min=11006, max=57572, avg=31402.23, stdev=5786.85 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[18744], 5.00th=[22676], 10.00th=[26870], 20.00th=[28443], 00:34:51.387 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30016], 60.00th=[30278], 00:34:51.387 | 70.00th=[31065], 80.00th=[35390], 90.00th=[39584], 95.00th=[43254], 00:34:51.387 | 99.00th=[50070], 99.50th=[50594], 99.90th=[55313], 99.95th=[55313], 00:34:51.387 | 99.99th=[57410] 00:34:51.387 bw ( KiB/s): min= 1872, max= 2176, per=4.05%, avg=2032.40, stdev=80.40, samples=20 00:34:51.387 iops : min= 468, max= 544, avg=508.10, stdev=20.10, samples=20 00:34:51.387 lat (msec) : 20=2.01%, 50=97.03%, 100=0.96% 00:34:51.387 cpu : usr=98.82%, sys=0.75%, ctx=150, majf=0, minf=43 00:34:51.387 IO depths : 1=1.4%, 2=2.9%, 4=11.3%, 8=71.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212499: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=527, BW=2112KiB/s (2162kB/s)(20.6MiB/10013msec) 00:34:51.387 slat (usec): min=5, max=101, avg=15.27, stdev=13.52 00:34:51.387 clat (usec): min=13231, max=63610, avg=30186.11, stdev=4849.91 00:34:51.387 lat (usec): min=13238, max=63631, avg=30201.39, stdev=4850.23 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[17171], 5.00th=[22676], 10.00th=[27132], 20.00th=[28443], 00:34:51.387 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:34:51.387 | 70.00th=[30278], 80.00th=[31065], 90.00th=[35390], 95.00th=[39584], 00:34:51.387 | 99.00th=[48497], 99.50th=[51119], 99.90th=[56886], 99.95th=[63701], 00:34:51.387 | 99.99th=[63701] 00:34:51.387 bw ( KiB/s): min= 1984, max= 2224, per=4.20%, avg=2112.00, stdev=69.60, samples=20 00:34:51.387 iops : min= 496, max= 556, avg=528.00, stdev=17.40, samples=20 00:34:51.387 lat (msec) : 20=2.59%, 50=96.88%, 100=0.53% 00:34:51.387 cpu : usr=99.15%, sys=0.53%, ctx=17, majf=0, minf=50 00:34:51.387 IO depths : 1=1.8%, 2=4.0%, 4=12.5%, 8=69.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=91.1%, 8=4.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.387 filename2: (groupid=0, jobs=1): err= 0: pid=1212500: Fri Jul 26 13:45:46 2024 00:34:51.387 read: IOPS=543, 
BW=2175KiB/s (2227kB/s)(21.2MiB/10004msec) 00:34:51.387 slat (usec): min=5, max=117, avg=19.94, stdev=15.98 00:34:51.387 clat (usec): min=6703, max=36993, avg=29239.99, stdev=2058.00 00:34:51.387 lat (usec): min=6711, max=37013, avg=29259.93, stdev=2059.31 00:34:51.387 clat percentiles (usec): 00:34:51.387 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:34:51.387 | 30.00th=[28705], 40.00th=[29230], 50.00th=[29492], 60.00th=[29754], 00:34:51.387 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 00:34:51.387 | 99.00th=[31589], 99.50th=[32113], 99.90th=[36963], 99.95th=[36963], 00:34:51.387 | 99.99th=[36963] 00:34:51.387 bw ( KiB/s): min= 2048, max= 2304, per=4.29%, avg=2156.00, stdev=63.82, samples=19 00:34:51.387 iops : min= 512, max= 576, avg=539.00, stdev=15.95, samples=19 00:34:51.387 lat (msec) : 10=0.59%, 20=0.20%, 50=99.21% 00:34:51.387 cpu : usr=99.17%, sys=0.50%, ctx=62, majf=0, minf=46 00:34:51.387 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:51.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.387 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.388 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.388 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:51.388 00:34:51.388 Run status group 0 (all jobs): 00:34:51.388 READ: bw=49.0MiB/s (51.4MB/s), 1977KiB/s-2342KiB/s (2024kB/s-2398kB/s), io=492MiB (516MB), run=10001-10025msec 00:34:51.388 13:45:46 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:51.388 13:45:46 -- target/dif.sh@43 -- # local sub 00:34:51.388 13:45:46 -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.388 13:45:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.388 13:45:46 -- target/dif.sh@36 -- # local sub_id=0 00:34:51.388 13:45:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.388 13:45:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:51.388 13:45:46 -- target/dif.sh@36 -- # local sub_id=1 00:34:51.388 13:45:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.388 13:45:46 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:51.388 13:45:46 -- target/dif.sh@36 -- # local sub_id=2 00:34:51.388 13:45:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # numjobs=2 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # iodepth=8 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # runtime=5 00:34:51.388 13:45:46 -- target/dif.sh@115 -- # files=1 00:34:51.388 13:45:46 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:51.388 13:45:46 -- target/dif.sh@28 -- # local sub 00:34:51.388 13:45:46 -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.388 13:45:46 -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.388 13:45:46 -- target/dif.sh@18 -- # local sub_id=0 00:34:51.388 13:45:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 bdev_null0 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.388 13:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 [2024-07-26 13:45:47.001763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.388 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:47 -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.388 13:45:47 -- target/dif.sh@31 -- # create_subsystem 1 00:34:51.388 13:45:47 -- target/dif.sh@18 -- # local sub_id=1 00:34:51.388 13:45:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:51.388 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 bdev_null1 00:34:51.388 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:51.388 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:47 -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:51.388 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.388 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:51.388 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:34:51.388 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:51.388 13:45:47 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:51.388 13:45:47 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:51.388 13:45:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:51.388 13:45:47 -- nvmf/common.sh@520 -- # config=() 00:34:51.388 13:45:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.388 13:45:47 -- nvmf/common.sh@520 -- # local subsystem config 00:34:51.388 13:45:47 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.388 13:45:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:51.388 13:45:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:51.388 { 00:34:51.388 "params": { 00:34:51.388 "name": "Nvme$subsystem", 00:34:51.388 "trtype": "$TEST_TRANSPORT", 00:34:51.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.388 "adrfam": "ipv4", 00:34:51.388 "trsvcid": "$NVMF_PORT", 00:34:51.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.388 "hdgst": ${hdgst:-false}, 00:34:51.388 "ddgst": ${ddgst:-false} 00:34:51.388 }, 00:34:51.388 "method": "bdev_nvme_attach_controller" 00:34:51.388 } 00:34:51.388 EOF 00:34:51.388 )") 00:34:51.388 13:45:47 -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.388 13:45:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:51.388 13:45:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.388 13:45:47 -- target/dif.sh@54 -- # local file 00:34:51.388 13:45:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:51.388 13:45:47 -- target/dif.sh@56 -- # cat 00:34:51.388 13:45:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.388 13:45:47 -- common/autotest_common.sh@1320 -- # shift 00:34:51.388 13:45:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:51.388 13:45:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.388 13:45:47 -- nvmf/common.sh@542 -- # cat 00:34:51.388 13:45:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.388 13:45:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.388 13:45:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:51.388 13:45:47 -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.388 13:45:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:51.388 13:45:47 -- target/dif.sh@73 -- # cat 00:34:51.388 13:45:47 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:34:51.388 13:45:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:51.388 { 00:34:51.388 "params": { 00:34:51.388 "name": "Nvme$subsystem", 00:34:51.388 "trtype": "$TEST_TRANSPORT", 00:34:51.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.388 "adrfam": "ipv4", 00:34:51.388 "trsvcid": "$NVMF_PORT", 00:34:51.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.388 "hdgst": ${hdgst:-false}, 00:34:51.388 "ddgst": ${ddgst:-false} 00:34:51.388 }, 00:34:51.388 "method": "bdev_nvme_attach_controller" 00:34:51.388 } 00:34:51.388 EOF 00:34:51.388 )") 00:34:51.388 13:45:47 -- target/dif.sh@72 -- # (( file++ )) 00:34:51.388 13:45:47 -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.388 13:45:47 -- nvmf/common.sh@542 -- # cat 00:34:51.388 13:45:47 -- nvmf/common.sh@544 -- # jq . 00:34:51.388 13:45:47 -- nvmf/common.sh@545 -- # IFS=, 00:34:51.388 13:45:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:51.388 "params": { 00:34:51.388 "name": "Nvme0", 00:34:51.388 "trtype": "tcp", 00:34:51.388 "traddr": "10.0.0.2", 00:34:51.388 "adrfam": "ipv4", 00:34:51.388 "trsvcid": "4420", 00:34:51.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.388 "hdgst": false, 00:34:51.388 "ddgst": false 00:34:51.388 }, 00:34:51.389 "method": "bdev_nvme_attach_controller" 00:34:51.389 },{ 00:34:51.389 "params": { 00:34:51.389 "name": "Nvme1", 00:34:51.389 "trtype": "tcp", 00:34:51.389 "traddr": "10.0.0.2", 00:34:51.389 "adrfam": "ipv4", 00:34:51.389 "trsvcid": "4420", 00:34:51.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:51.389 "hdgst": false, 00:34:51.389 "ddgst": false 00:34:51.389 }, 00:34:51.389 "method": "bdev_nvme_attach_controller" 00:34:51.389 }' 00:34:51.389 13:45:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:51.389 13:45:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:51.389 13:45:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.389 13:45:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.389 13:45:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:51.389 13:45:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:51.389 13:45:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:51.389 13:45:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:51.389 13:45:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.389 13:45:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.389 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:51.389 ... 00:34:51.389 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:51.389 ... 00:34:51.389 fio-3.35 00:34:51.389 Starting 4 threads 00:34:51.389 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.389 [2024-07-26 13:45:47.963606] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
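The trace above repeats the pattern used throughout these dif tests: gen_nvmf_target_json prints one bdev_nvme_attach_controller entry per subsystem, the assembled JSON is handed to fio on /dev/fd/62, the generated job file arrives on /dev/fd/61, and the SPDK fio bdev plugin is injected via LD_PRELOAD. Below is a minimal sketch of that invocation, reusing only the paths and options visible in this log; bdev.json and dif.fio are illustrative placeholders for the two file descriptors.

# SPDK fio bdev plugin built in this workspace, as shown in the LD_PRELOAD line of the trace
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# fio reads the bdev JSON config and the job file from two inputs; the test scripts
# pass them as process substitutions, which is why /dev/fd/62 and /dev/fd/61 appear above.
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(cat bdev.json) \
    <(cat dif.fio)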
00:34:51.389 [2024-07-26 13:45:47.963656] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:56.681 00:34:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=1214798: Fri Jul 26 13:45:53 2024 00:34:56.681 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5003msec) 00:34:56.681 slat (nsec): min=5370, max=44048, avg=5962.96, stdev=1313.30 00:34:56.681 clat (usec): min=2249, max=46971, avg=4231.63, stdev=1415.12 00:34:56.681 lat (usec): min=2255, max=47015, avg=4237.60, stdev=1415.39 00:34:56.681 clat percentiles (usec): 00:34:56.681 | 1.00th=[ 2868], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3621], 00:34:56.681 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4146], 60.00th=[ 4293], 00:34:56.681 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5407], 00:34:56.681 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7046], 99.95th=[46924], 00:34:56.681 | 99.99th=[46924] 00:34:56.681 bw ( KiB/s): min=13712, max=15616, per=21.97%, avg=15062.40, stdev=521.86, samples=10 00:34:56.681 iops : min= 1714, max= 1952, avg=1882.80, stdev=65.23, samples=10 00:34:56.681 lat (msec) : 4=41.92%, 10=57.99%, 50=0.08% 00:34:56.681 cpu : usr=96.62%, sys=2.50%, ctx=122, majf=0, minf=9 00:34:56.681 IO depths : 1=0.1%, 2=1.2%, 4=68.3%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 issued rwts: total=9422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.681 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.681 filename0: (groupid=0, jobs=1): err= 0: pid=1214800: Fri Jul 26 13:45:53 2024 00:34:56.681 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5005msec) 00:34:56.681 slat (nsec): min=5375, max=39322, avg=7251.93, stdev=2385.12 00:34:56.681 clat (usec): min=2117, max=6625, avg=3829.08, stdev=625.49 00:34:56.681 lat (usec): min=2144, max=6630, avg=3836.34, stdev=625.26 00:34:56.681 clat percentiles (usec): 00:34:56.681 | 1.00th=[ 2573], 5.00th=[ 2868], 10.00th=[ 3064], 20.00th=[ 3294], 00:34:56.681 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3949], 00:34:56.681 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4948], 00:34:56.681 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 5997], 99.95th=[ 6259], 00:34:56.681 | 99.99th=[ 6325] 00:34:56.681 bw ( KiB/s): min=16352, max=16976, per=24.26%, avg=16627.20, stdev=176.37, samples=10 00:34:56.681 iops : min= 2044, max= 2122, avg=2078.40, stdev=22.05, samples=10 00:34:56.681 lat (msec) : 4=64.16%, 10=35.84% 00:34:56.681 cpu : usr=96.24%, sys=3.36%, ctx=5, majf=0, minf=0 00:34:56.681 IO depths : 1=0.2%, 2=1.0%, 4=68.2%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 issued rwts: total=10400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.681 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.681 filename1: (groupid=0, jobs=1): err= 0: pid=1214801: Fri Jul 26 13:45:53 2024 00:34:56.681 read: IOPS=2681, BW=20.9MiB/s (22.0MB/s)(105MiB/5003msec) 00:34:56.681 slat (nsec): min=7819, max=31382, avg=8341.03, stdev=1179.02 00:34:56.681 clat (usec): min=1047, max=6521, avg=2959.30, stdev=529.89 00:34:56.681 lat (usec): min=1056, max=6551, avg=2967.64, stdev=529.95 00:34:56.681 clat percentiles (usec): 00:34:56.681 | 1.00th=[ 1844], 
5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2540], 00:34:56.681 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3032], 00:34:56.681 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3621], 95.00th=[ 3916], 00:34:56.681 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5473], 00:34:56.681 | 99.99th=[ 6456] 00:34:56.681 bw ( KiB/s): min=20544, max=21920, per=31.30%, avg=21452.80, stdev=473.12, samples=10 00:34:56.681 iops : min= 2568, max= 2740, avg=2681.60, stdev=59.14, samples=10 00:34:56.681 lat (msec) : 2=2.30%, 4=94.07%, 10=3.63% 00:34:56.681 cpu : usr=97.08%, sys=2.64%, ctx=9, majf=0, minf=0 00:34:56.681 IO depths : 1=0.8%, 2=3.6%, 4=68.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 issued rwts: total=13415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.681 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.681 filename1: (groupid=0, jobs=1): err= 0: pid=1214802: Fri Jul 26 13:45:53 2024 00:34:56.681 read: IOPS=1928, BW=15.1MiB/s (15.8MB/s)(75.4MiB/5002msec) 00:34:56.681 slat (nsec): min=5367, max=28105, avg=5951.62, stdev=1546.89 00:34:56.681 clat (usec): min=2034, max=6705, avg=4133.18, stdev=659.05 00:34:56.681 lat (usec): min=2040, max=6731, avg=4139.13, stdev=659.00 00:34:56.681 clat percentiles (usec): 00:34:56.681 | 1.00th=[ 2802], 5.00th=[ 3130], 10.00th=[ 3326], 20.00th=[ 3589], 00:34:56.681 | 30.00th=[ 3752], 40.00th=[ 3916], 50.00th=[ 4047], 60.00th=[ 4228], 00:34:56.681 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5276], 00:34:56.681 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 6390], 99.95th=[ 6390], 00:34:56.681 | 99.99th=[ 6718] 00:34:56.681 bw ( KiB/s): min=15056, max=16080, per=22.50%, avg=15422.20, stdev=305.44, samples=10 00:34:56.681 iops : min= 1882, max= 2010, avg=1927.70, stdev=38.16, samples=10 00:34:56.681 lat (msec) : 4=46.75%, 10=53.25% 00:34:56.681 cpu : usr=97.10%, sys=2.66%, ctx=5, majf=0, minf=0 00:34:56.681 IO depths : 1=0.1%, 2=1.1%, 4=68.5%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.681 issued rwts: total=9645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.681 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:56.681 00:34:56.681 Run status group 0 (all jobs): 00:34:56.681 READ: bw=66.9MiB/s (70.2MB/s), 14.7MiB/s-20.9MiB/s (15.4MB/s-22.0MB/s), io=335MiB (351MB), run=5002-5005msec 00:34:56.681 13:45:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:56.681 13:45:53 -- target/dif.sh@43 -- # local sub 00:34:56.681 13:45:53 -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.681 13:45:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:56.681 13:45:53 -- target/dif.sh@36 -- # local sub_id=0 00:34:56.681 13:45:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:56.681 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.681 13:45:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:56.681 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 
13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.681 13:45:53 -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.681 13:45:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:56.681 13:45:53 -- target/dif.sh@36 -- # local sub_id=1 00:34:56.681 13:45:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.681 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.681 13:45:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:56.681 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.681 00:34:56.681 real 0m24.422s 00:34:56.681 user 5m16.445s 00:34:56.681 sys 0m4.421s 00:34:56.681 13:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 ************************************ 00:34:56.681 END TEST fio_dif_rand_params 00:34:56.681 ************************************ 00:34:56.681 13:45:53 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:56.681 13:45:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:56.681 13:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:56.681 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.681 ************************************ 00:34:56.681 START TEST fio_dif_digest 00:34:56.681 ************************************ 00:34:56.681 13:45:53 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:56.681 13:45:53 -- target/dif.sh@123 -- # local NULL_DIF 00:34:56.681 13:45:53 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:56.681 13:45:53 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:56.681 13:45:53 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:56.681 13:45:53 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:56.681 13:45:53 -- target/dif.sh@127 -- # numjobs=3 00:34:56.681 13:45:53 -- target/dif.sh@127 -- # iodepth=3 00:34:56.681 13:45:53 -- target/dif.sh@127 -- # runtime=10 00:34:56.681 13:45:53 -- target/dif.sh@128 -- # hdgst=true 00:34:56.681 13:45:53 -- target/dif.sh@128 -- # ddgst=true 00:34:56.681 13:45:53 -- target/dif.sh@130 -- # create_subsystems 0 00:34:56.682 13:45:53 -- target/dif.sh@28 -- # local sub 00:34:56.682 13:45:53 -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.682 13:45:53 -- target/dif.sh@31 -- # create_subsystem 0 00:34:56.682 13:45:53 -- target/dif.sh@18 -- # local sub_id=0 00:34:56.682 13:45:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:56.682 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.682 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.682 bdev_null0 00:34:56.682 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.682 13:45:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:56.682 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.682 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.682 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.682 13:45:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:34:56.682 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.682 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.682 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.682 13:45:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:56.682 13:45:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.682 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:34:56.682 [2024-07-26 13:45:53.348761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.682 13:45:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.682 13:45:53 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:56.682 13:45:53 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:56.682 13:45:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:56.682 13:45:53 -- nvmf/common.sh@520 -- # config=() 00:34:56.682 13:45:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.682 13:45:53 -- nvmf/common.sh@520 -- # local subsystem config 00:34:56.682 13:45:53 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.682 13:45:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:56.682 13:45:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:56.682 { 00:34:56.682 "params": { 00:34:56.682 "name": "Nvme$subsystem", 00:34:56.682 "trtype": "$TEST_TRANSPORT", 00:34:56.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.682 "adrfam": "ipv4", 00:34:56.682 "trsvcid": "$NVMF_PORT", 00:34:56.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.682 "hdgst": ${hdgst:-false}, 00:34:56.682 "ddgst": ${ddgst:-false} 00:34:56.682 }, 00:34:56.682 "method": "bdev_nvme_attach_controller" 00:34:56.682 } 00:34:56.682 EOF 00:34:56.682 )") 00:34:56.682 13:45:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:56.682 13:45:53 -- target/dif.sh@82 -- # gen_fio_conf 00:34:56.682 13:45:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.682 13:45:53 -- target/dif.sh@54 -- # local file 00:34:56.682 13:45:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:56.682 13:45:53 -- target/dif.sh@56 -- # cat 00:34:56.682 13:45:53 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.682 13:45:53 -- common/autotest_common.sh@1320 -- # shift 00:34:56.682 13:45:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:56.682 13:45:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.682 13:45:53 -- nvmf/common.sh@542 -- # cat 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.682 13:45:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:56.682 13:45:53 -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:56.682 13:45:53 -- nvmf/common.sh@544 -- # jq . 
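gen_nvmf_target_json above emits one bdev_nvme_attach_controller entry per subsystem and the jq step stitches them into the JSON that the fio bdev plugin reads over /dev/fd/62. As a rough sketch, written to a regular file instead of a file descriptor, the assembled configuration for subsystem 0 could look like the heredoc below. The outer "subsystems"/"bdev"/"config" envelope is an assumption based on SPDK's usual JSON config layout (the trace only prints the inner entry, visible just below), and bdev.json is a placeholder file name.

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF
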
00:34:56.682 13:45:53 -- nvmf/common.sh@545 -- # IFS=, 00:34:56.682 13:45:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:56.682 "params": { 00:34:56.682 "name": "Nvme0", 00:34:56.682 "trtype": "tcp", 00:34:56.682 "traddr": "10.0.0.2", 00:34:56.682 "adrfam": "ipv4", 00:34:56.682 "trsvcid": "4420", 00:34:56.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.682 "hdgst": true, 00:34:56.682 "ddgst": true 00:34:56.682 }, 00:34:56.682 "method": "bdev_nvme_attach_controller" 00:34:56.682 }' 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:56.682 13:45:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:56.682 13:45:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:56.682 13:45:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:56.682 13:45:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:56.682 13:45:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:56.682 13:45:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.682 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:56.682 ... 00:34:56.682 fio-3.35 00:34:56.682 Starting 3 threads 00:34:56.682 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.943 [2024-07-26 13:45:54.189690] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
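The resolved parameters printed above are what make this the digest test: hdgst and ddgst are true, so the NVMe/TCP initiator embedded in the fio plugin enables header and data digests (CRC32C over the PDU header and payload) on its connection to 10.0.0.2:4420. Stripped of the /dev/fd plumbing and the sanitizer LD_PRELOAD handling, the invocation the wrapper builds amounts to the sketch below; bdev.json and digest.fio are placeholder names standing in for /dev/fd/62 and /dev/fd/61, and /usr/src/fio/fio is simply where fio lives on this CI host.

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # LD_PRELOAD loads the SPDK bdev external ioengine; --spdk_json_conf points it
  # at the attach-controller configuration, and the last argument is the job file
  # (randread, bs=128k, iodepth=3, 3 jobs, 10s runtime, per the dif.sh settings above)
  LD_PRELOAD="$SPDK_ROOT/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio

The rpc.c errors around this point (spdk.sock already in use) appear to come from the plugin's embedded SPDK application trying to bind the same default RPC socket the target already holds; the fio run that follows completes with err=0 on every job, so they read as noise rather than a failure here.
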
00:34:56.943 [2024-07-26 13:45:54.189735] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:06.946 00:35:06.946 filename0: (groupid=0, jobs=1): err= 0: pid=1216238: Fri Jul 26 13:46:04 2024 00:35:06.946 read: IOPS=134, BW=16.8MiB/s (17.7MB/s)(169MiB/10009msec) 00:35:06.946 slat (nsec): min=5735, max=57163, avg=7199.86, stdev=2072.81 00:35:06.946 clat (usec): min=6717, max=97995, avg=22243.75, stdev=18344.97 00:35:06.946 lat (usec): min=6723, max=98002, avg=22250.95, stdev=18344.96 00:35:06.946 clat percentiles (usec): 00:35:06.946 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10945], 00:35:06.946 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13960], 60.00th=[15008], 00:35:06.946 | 70.00th=[16057], 80.00th=[52167], 90.00th=[55313], 95.00th=[56361], 00:35:06.946 | 99.00th=[58983], 99.50th=[96994], 99.90th=[96994], 99.95th=[98042], 00:35:06.946 | 99.99th=[98042] 00:35:06.946 bw ( KiB/s): min=11008, max=24320, per=33.45%, avg=17241.60, stdev=3753.49, samples=20 00:35:06.946 iops : min= 86, max= 190, avg=134.70, stdev=29.32, samples=20 00:35:06.946 lat (msec) : 10=13.27%, 20=65.09%, 50=0.07%, 100=21.57% 00:35:06.946 cpu : usr=96.91%, sys=2.82%, ctx=17, majf=0, minf=175 00:35:06.946 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 issued rwts: total=1349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.946 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.946 filename0: (groupid=0, jobs=1): err= 0: pid=1216239: Fri Jul 26 13:46:04 2024 00:35:06.946 read: IOPS=133, BW=16.7MiB/s (17.6MB/s)(168MiB/10008msec) 00:35:06.946 slat (nsec): min=5733, max=57796, avg=7767.93, stdev=2243.26 00:35:06.946 clat (usec): min=7480, max=99224, avg=22372.91, stdev=18390.53 00:35:06.946 lat (usec): min=7486, max=99232, avg=22380.68, stdev=18390.65 00:35:06.946 clat percentiles (usec): 00:35:06.946 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11600], 00:35:06.946 | 30.00th=[12387], 40.00th=[13304], 50.00th=[14222], 60.00th=[15139], 00:35:06.946 | 70.00th=[16188], 80.00th=[51643], 90.00th=[55313], 95.00th=[56361], 00:35:06.946 | 99.00th=[93848], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:35:06.946 | 99.99th=[99091] 00:35:06.946 bw ( KiB/s): min= 9472, max=24064, per=33.25%, avg=17139.20, stdev=3216.24, samples=20 00:35:06.946 iops : min= 74, max= 188, avg=133.90, stdev=25.13, samples=20 00:35:06.946 lat (msec) : 10=8.80%, 20=70.02%, 50=0.37%, 100=20.81% 00:35:06.946 cpu : usr=96.85%, sys=2.88%, ctx=18, majf=0, minf=113 00:35:06.946 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.946 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.946 filename0: (groupid=0, jobs=1): err= 0: pid=1216240: Fri Jul 26 13:46:04 2024 00:35:06.946 read: IOPS=134, BW=16.9MiB/s (17.7MB/s)(170MiB/10048msec) 00:35:06.946 slat (nsec): min=5736, max=31021, avg=6731.11, stdev=1219.05 00:35:06.946 clat (usec): min=6575, max=98801, avg=22185.99, stdev=18096.95 00:35:06.946 lat (usec): min=6581, max=98807, avg=22192.72, stdev=18096.98 00:35:06.946 clat 
percentiles (usec): 00:35:06.946 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[11076], 00:35:06.946 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13829], 60.00th=[14877], 00:35:06.946 | 70.00th=[16057], 80.00th=[52167], 90.00th=[55313], 95.00th=[56886], 00:35:06.946 | 99.00th=[58983], 99.50th=[61604], 99.90th=[98042], 99.95th=[99091], 00:35:06.946 | 99.99th=[99091] 00:35:06.946 bw ( KiB/s): min= 9984, max=26112, per=33.63%, avg=17331.20, stdev=4387.99, samples=20 00:35:06.946 iops : min= 78, max= 204, avg=135.40, stdev=34.28, samples=20 00:35:06.946 lat (msec) : 10=11.87%, 20=66.15%, 50=0.44%, 100=21.53% 00:35:06.946 cpu : usr=97.03%, sys=2.70%, ctx=21, majf=0, minf=182 00:35:06.946 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.946 issued rwts: total=1356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.946 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:06.946 00:35:06.946 Run status group 0 (all jobs): 00:35:06.946 READ: bw=50.3MiB/s (52.8MB/s), 16.7MiB/s-16.9MiB/s (17.6MB/s-17.7MB/s), io=506MiB (530MB), run=10008-10048msec 00:35:07.207 13:46:04 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:07.207 13:46:04 -- target/dif.sh@43 -- # local sub 00:35:07.207 13:46:04 -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.207 13:46:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:07.207 13:46:04 -- target/dif.sh@36 -- # local sub_id=0 00:35:07.207 13:46:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:07.207 13:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:07.207 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:35:07.207 13:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:07.207 13:46:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:07.207 13:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:07.207 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:35:07.207 13:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:07.207 00:35:07.208 real 0m11.202s 00:35:07.208 user 0m46.220s 00:35:07.208 sys 0m1.142s 00:35:07.208 13:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:07.208 13:46:04 -- common/autotest_common.sh@10 -- # set +x 00:35:07.208 ************************************ 00:35:07.208 END TEST fio_dif_digest 00:35:07.208 ************************************ 00:35:07.208 13:46:04 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:07.208 13:46:04 -- target/dif.sh@147 -- # nvmftestfini 00:35:07.208 13:46:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:07.208 13:46:04 -- nvmf/common.sh@116 -- # sync 00:35:07.208 13:46:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:07.208 13:46:04 -- nvmf/common.sh@119 -- # set +e 00:35:07.208 13:46:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:07.208 13:46:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:07.208 rmmod nvme_tcp 00:35:07.208 rmmod nvme_fabrics 00:35:07.208 rmmod nvme_keyring 00:35:07.208 13:46:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:07.208 13:46:04 -- nvmf/common.sh@123 -- # set -e 00:35:07.208 13:46:04 -- nvmf/common.sh@124 -- # return 0 00:35:07.208 13:46:04 -- nvmf/common.sh@477 -- # '[' -n 1205220 ']' 00:35:07.208 13:46:04 -- nvmf/common.sh@478 -- # killprocess 1205220 00:35:07.208 13:46:04 -- 
common/autotest_common.sh@926 -- # '[' -z 1205220 ']' 00:35:07.208 13:46:04 -- common/autotest_common.sh@930 -- # kill -0 1205220 00:35:07.208 13:46:04 -- common/autotest_common.sh@931 -- # uname 00:35:07.208 13:46:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:07.208 13:46:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1205220 00:35:07.469 13:46:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:07.469 13:46:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:07.469 13:46:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1205220' 00:35:07.469 killing process with pid 1205220 00:35:07.469 13:46:04 -- common/autotest_common.sh@945 -- # kill 1205220 00:35:07.469 13:46:04 -- common/autotest_common.sh@950 -- # wait 1205220 00:35:07.469 13:46:04 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:07.469 13:46:04 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:10.770 Waiting for block devices as requested 00:35:10.770 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:10.770 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:11.031 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:11.031 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:11.031 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:11.292 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:11.292 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:11.292 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:11.553 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:11.553 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:11.814 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:11.814 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:11.814 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:11.814 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:12.107 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:12.107 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:12.107 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:12.422 13:46:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:12.422 13:46:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:12.422 13:46:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:12.422 13:46:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:12.422 13:46:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.422 13:46:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.422 13:46:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.338 13:46:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:14.338 00:35:14.338 real 1m17.506s 00:35:14.338 user 8m2.691s 00:35:14.338 sys 0m19.686s 00:35:14.338 13:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.338 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.338 ************************************ 00:35:14.338 END TEST nvmf_dif 00:35:14.338 ************************************ 00:35:14.599 13:46:11 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:14.599 13:46:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:14.599 13:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:14.599 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.599 ************************************ 00:35:14.600 START TEST nvmf_abort_qd_sizes 
00:35:14.600 ************************************ 00:35:14.600 13:46:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:14.600 * Looking for test storage... 00:35:14.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:14.600 13:46:11 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:14.600 13:46:11 -- nvmf/common.sh@7 -- # uname -s 00:35:14.600 13:46:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.600 13:46:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.600 13:46:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.600 13:46:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.600 13:46:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.600 13:46:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.600 13:46:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.600 13:46:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.600 13:46:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.600 13:46:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.600 13:46:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.600 13:46:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:14.600 13:46:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.600 13:46:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.600 13:46:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:14.600 13:46:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:14.600 13:46:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.600 13:46:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.600 13:46:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.600 13:46:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.600 13:46:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.600 13:46:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.600 13:46:11 -- paths/export.sh@5 -- # export PATH 00:35:14.600 13:46:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.600 13:46:11 -- nvmf/common.sh@46 -- # : 0 00:35:14.600 13:46:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:14.600 13:46:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:14.600 13:46:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:14.600 13:46:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.600 13:46:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.600 13:46:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:14.600 13:46:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:14.600 13:46:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:14.600 13:46:11 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:14.600 13:46:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:14.600 13:46:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:14.600 13:46:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:14.600 13:46:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:14.600 13:46:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:14.600 13:46:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.600 13:46:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.600 13:46:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.600 13:46:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:35:14.600 13:46:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:14.600 13:46:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:14.600 13:46:11 -- common/autotest_common.sh@10 -- # set +x 00:35:22.753 13:46:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:22.753 13:46:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:22.753 13:46:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:22.753 13:46:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:22.753 13:46:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:22.753 13:46:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:22.753 13:46:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:22.753 13:46:18 -- nvmf/common.sh@294 -- # net_devs=() 00:35:22.753 13:46:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:22.753 13:46:18 -- nvmf/common.sh@295 -- # e810=() 00:35:22.753 13:46:18 -- nvmf/common.sh@295 -- # local -ga e810 00:35:22.753 13:46:18 -- nvmf/common.sh@296 -- # x722=() 00:35:22.753 13:46:18 -- nvmf/common.sh@296 -- # local -ga x722 00:35:22.753 13:46:18 -- nvmf/common.sh@297 -- # mlx=() 00:35:22.753 13:46:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:22.753 13:46:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.753 13:46:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:22.753 13:46:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:35:22.753 13:46:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:22.753 13:46:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:22.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:22.753 13:46:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:22.753 13:46:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:22.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:22.753 13:46:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:22.753 13:46:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.753 13:46:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.753 13:46:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:22.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:22.753 13:46:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.753 13:46:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:22.753 13:46:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.753 13:46:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.753 13:46:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:22.753 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:22.753 13:46:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.753 13:46:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:22.753 13:46:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:22.753 13:46:18 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:22.753 13:46:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:22.753 13:46:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.753 13:46:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.753 13:46:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.753 13:46:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:22.753 13:46:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.753 13:46:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.753 13:46:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:22.753 13:46:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.753 13:46:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.753 13:46:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:22.753 13:46:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:22.753 13:46:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.753 13:46:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.753 13:46:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.753 13:46:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.753 13:46:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:22.753 13:46:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.753 13:46:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.753 13:46:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.753 13:46:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:22.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:35:22.753 00:35:22.753 --- 10.0.0.2 ping statistics --- 00:35:22.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.753 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:35:22.753 13:46:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:22.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:35:22.753 00:35:22.753 --- 10.0.0.1 ping statistics --- 00:35:22.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.753 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:35:22.753 13:46:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.753 13:46:19 -- nvmf/common.sh@410 -- # return 0 00:35:22.753 13:46:19 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:22.753 13:46:19 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.302 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:25.302 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:25.562 13:46:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.562 13:46:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:25.562 13:46:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:25.562 13:46:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.562 13:46:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:25.562 13:46:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:25.824 13:46:23 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:25.824 13:46:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:25.824 13:46:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:25.824 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:35:25.824 13:46:23 -- nvmf/common.sh@469 -- # nvmfpid=1225763 00:35:25.824 13:46:23 -- nvmf/common.sh@470 -- # waitforlisten 1225763 00:35:25.824 13:46:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:25.824 13:46:23 -- common/autotest_common.sh@819 -- # '[' -z 1225763 ']' 00:35:25.824 13:46:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.824 13:46:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:25.824 13:46:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.824 13:46:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:25.824 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:35:25.824 [2024-07-26 13:46:23.119604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:35:25.824 [2024-07-26 13:46:23.119650] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.824 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.824 [2024-07-26 13:46:23.185998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:25.824 [2024-07-26 13:46:23.216599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:25.824 [2024-07-26 13:46:23.216735] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.824 [2024-07-26 13:46:23.216745] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.824 [2024-07-26 13:46:23.216754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.824 [2024-07-26 13:46:23.216894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.824 [2024-07-26 13:46:23.217000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.824 [2024-07-26 13:46:23.217156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.824 [2024-07-26 13:46:23.217157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.767 13:46:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:26.767 13:46:23 -- common/autotest_common.sh@852 -- # return 0 00:35:26.767 13:46:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:26.767 13:46:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:26.767 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:35:26.767 13:46:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:26.767 13:46:23 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:26.767 13:46:23 -- scripts/common.sh@312 -- # local nvmes 00:35:26.767 13:46:23 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:35:26.767 13:46:23 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:26.767 13:46:23 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:26.767 13:46:23 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:35:26.767 13:46:23 -- scripts/common.sh@322 -- # uname -s 00:35:26.767 13:46:23 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:26.767 13:46:23 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:26.767 13:46:23 -- scripts/common.sh@327 -- # (( 1 )) 00:35:26.767 13:46:23 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:26.767 13:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:26.767 13:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:26.767 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:35:26.767 ************************************ 00:35:26.767 START TEST 
spdk_target_abort 00:35:26.767 ************************************ 00:35:26.767 13:46:23 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:26.767 13:46:23 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:35:26.767 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:26.767 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:35:26.767 spdk_targetn1 00:35:26.767 13:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:26.767 13:46:24 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:26.767 13:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:26.767 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:35:27.028 [2024-07-26 13:46:24.244169] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.028 13:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:27.028 13:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.028 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:35:27.028 13:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:27.028 13:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.028 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:35:27.028 13:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:27.028 13:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:27.028 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:35:27.028 [2024-07-26 13:46:24.284464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.028 13:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.028 13:46:24 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:27.028 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.028 [2024-07-26 13:46:24.495766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:400 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:35:27.028 [2024-07-26 13:46:24.495789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0033 p:1 m:0 dnr:0 00:35:27.289 [2024-07-26 13:46:24.505359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:656 len:8 PRP1 0x2000078be000 PRP2 0x0 00:35:27.289 [2024-07-26 13:46:24.505375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0053 p:1 m:0 dnr:0 00:35:27.289 [2024-07-26 13:46:24.565206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2176 len:8 PRP1 0x2000078be000 PRP2 0x0 00:35:27.289 [2024-07-26 13:46:24.565224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:27.289 [2024-07-26 13:46:24.613697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3312 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:35:27.289 [2024-07-26 13:46:24.613713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a1 p:0 m:0 dnr:0 00:35:27.289 [2024-07-26 13:46:24.626703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3536 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:35:27.289 [2024-07-26 13:46:24.626717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00bc p:0 m:0 dnr:0 00:35:30.590 Initializing NVMe Controllers 00:35:30.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:30.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:30.590 Initialization complete. Launching workers. 
00:35:30.590 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9369, failed: 5 00:35:30.590 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2720, failed to submit 6654 00:35:30.590 success 864, unsuccess 1856, failed 0 00:35:30.590 13:46:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.590 13:46:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:30.590 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.590 [2024-07-26 13:46:27.644041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:544 len:8 PRP1 0x200007c54000 PRP2 0x0 00:35:30.590 [2024-07-26 13:46:27.644086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:35:30.590 [2024-07-26 13:46:27.658394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:888 len:8 PRP1 0x200007c54000 PRP2 0x0 00:35:30.590 [2024-07-26 13:46:27.658418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:35:30.590 [2024-07-26 13:46:27.707799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2072 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:35:30.590 [2024-07-26 13:46:27.707826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:33.892 Initializing NVMe Controllers 00:35:33.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:33.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:33.892 Initialization complete. Launching workers. 
00:35:33.892 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8669, failed: 3 00:35:33.892 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1216, failed to submit 7456 00:35:33.892 success 361, unsuccess 855, failed 0 00:35:33.892 13:46:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:33.892 13:46:30 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:33.892 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.892 [2024-07-26 13:46:31.097783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:1616 len:8 PRP1 0x20000791e000 PRP2 0x0 00:35:33.892 [2024-07-26 13:46:31.097813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:33.892 [2024-07-26 13:46:31.105930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:153 nsid:1 lba:2440 len:8 PRP1 0x20000791a000 PRP2 0x0 00:35:33.892 [2024-07-26 13:46:31.105948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:153 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:35:37.194 [2024-07-26 13:46:33.935773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:302392 len:8 PRP1 0x2000078ea000 PRP2 0x0 00:35:37.194 [2024-07-26 13:46:33.935819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:187 cdw0:0 sqhd:0046 p:1 m:0 dnr:0 00:35:37.194 Initializing NVMe Controllers 00:35:37.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:37.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:37.194 Initialization complete. Launching workers. 
00:35:37.194 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 39851, failed: 3 00:35:37.194 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2606, failed to submit 37248 00:35:37.194 success 724, unsuccess 1882, failed 0 00:35:37.194 13:46:34 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:37.194 13:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:37.194 13:46:34 -- common/autotest_common.sh@10 -- # set +x 00:35:37.194 13:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:37.194 13:46:34 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:37.194 13:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:37.194 13:46:34 -- common/autotest_common.sh@10 -- # set +x 00:35:38.578 13:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:38.578 13:46:35 -- target/abort_qd_sizes.sh@62 -- # killprocess 1225763 00:35:38.578 13:46:35 -- common/autotest_common.sh@926 -- # '[' -z 1225763 ']' 00:35:38.578 13:46:35 -- common/autotest_common.sh@930 -- # kill -0 1225763 00:35:38.578 13:46:35 -- common/autotest_common.sh@931 -- # uname 00:35:38.578 13:46:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:38.578 13:46:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1225763 00:35:38.578 13:46:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:38.578 13:46:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:38.578 13:46:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1225763' 00:35:38.578 killing process with pid 1225763 00:35:38.578 13:46:36 -- common/autotest_common.sh@945 -- # kill 1225763 00:35:38.578 13:46:36 -- common/autotest_common.sh@950 -- # wait 1225763 00:35:38.839 00:35:38.839 real 0m12.200s 00:35:38.839 user 0m49.286s 00:35:38.839 sys 0m2.045s 00:35:38.839 13:46:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.839 13:46:36 -- common/autotest_common.sh@10 -- # set +x 00:35:38.840 ************************************ 00:35:38.840 END TEST spdk_target_abort 00:35:38.840 ************************************ 00:35:38.840 13:46:36 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:38.840 13:46:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:38.840 13:46:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:38.840 13:46:36 -- common/autotest_common.sh@10 -- # set +x 00:35:38.840 ************************************ 00:35:38.840 START TEST kernel_target_abort 00:35:38.840 ************************************ 00:35:38.840 13:46:36 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:38.840 13:46:36 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:38.840 13:46:36 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:38.840 13:46:36 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:38.840 13:46:36 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:38.840 13:46:36 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:38.840 13:46:36 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:38.840 13:46:36 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:38.840 13:46:36 -- nvmf/common.sh@627 -- # local block nvme 00:35:38.840 13:46:36 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:38.840 13:46:36 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:38.840 13:46:36 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:38.840 13:46:36 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:42.242 Waiting for block devices as requested 00:35:42.242 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:42.503 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:42.503 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:42.503 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:42.763 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:42.763 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:42.763 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:42.763 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:43.024 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:43.024 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:43.285 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:43.285 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:43.285 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:43.547 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:43.547 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:43.547 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:43.547 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:44.119 13:46:41 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:44.119 13:46:41 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:44.119 13:46:41 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:44.119 13:46:41 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:44.119 13:46:41 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:44.119 No valid GPT data, bailing 00:35:44.119 13:46:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:44.119 13:46:41 -- scripts/common.sh@393 -- # pt= 00:35:44.119 13:46:41 -- scripts/common.sh@394 -- # return 1 00:35:44.119 13:46:41 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:44.119 13:46:41 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:44.119 13:46:41 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:44.119 13:46:41 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:44.119 13:46:41 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:44.119 13:46:41 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:44.119 13:46:41 -- nvmf/common.sh@654 -- # echo 1 00:35:44.119 13:46:41 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:44.119 13:46:41 -- nvmf/common.sh@656 -- # echo 1 00:35:44.119 13:46:41 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:44.119 13:46:41 -- nvmf/common.sh@663 -- # echo tcp 00:35:44.119 13:46:41 -- nvmf/common.sh@664 -- # echo 4420 00:35:44.119 13:46:41 -- nvmf/common.sh@665 -- # echo ipv4 00:35:44.119 13:46:41 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:44.119 13:46:41 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:35:44.119 00:35:44.119 Discovery Log Number of Records 2, Generation counter 2 00:35:44.119 =====Discovery Log Entry 0====== 00:35:44.119 trtype: tcp 00:35:44.119 adrfam: ipv4 00:35:44.119 
subtype: current discovery subsystem 00:35:44.119 treq: not specified, sq flow control disable supported 00:35:44.119 portid: 1 00:35:44.119 trsvcid: 4420 00:35:44.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:44.119 traddr: 10.0.0.1 00:35:44.119 eflags: none 00:35:44.119 sectype: none 00:35:44.119 =====Discovery Log Entry 1====== 00:35:44.119 trtype: tcp 00:35:44.119 adrfam: ipv4 00:35:44.119 subtype: nvme subsystem 00:35:44.119 treq: not specified, sq flow control disable supported 00:35:44.119 portid: 1 00:35:44.119 trsvcid: 4420 00:35:44.119 subnqn: kernel_target 00:35:44.119 traddr: 10.0.0.1 00:35:44.119 eflags: none 00:35:44.119 sectype: none 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:44.119 13:46:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:44.119 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.419 Initializing NVMe Controllers 00:35:47.419 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:47.419 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:47.419 Initialization complete. Launching workers. 
00:35:47.419 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 36228, failed: 0 00:35:47.419 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36228, failed to submit 0 00:35:47.419 success 0, unsuccess 36228, failed 0 00:35:47.419 13:46:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:47.419 13:46:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:47.419 EAL: No free 2048 kB hugepages reported on node 1 00:35:50.720 Initializing NVMe Controllers 00:35:50.720 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:50.720 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:50.720 Initialization complete. Launching workers. 00:35:50.720 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 74623, failed: 0 00:35:50.720 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18794, failed to submit 55829 00:35:50.720 success 0, unsuccess 18794, failed 0 00:35:50.720 13:46:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:50.720 13:46:47 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:50.720 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.266 Initializing NVMe Controllers 00:35:53.266 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:53.266 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:53.266 Initialization complete. Launching workers. 
00:35:53.266 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 72589, failed: 0 00:35:53.266 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18118, failed to submit 54471 00:35:53.266 success 0, unsuccess 18118, failed 0 00:35:53.266 13:46:50 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:53.266 13:46:50 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:53.266 13:46:50 -- nvmf/common.sh@677 -- # echo 0 00:35:53.266 13:46:50 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:53.266 13:46:50 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:53.527 13:46:50 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:53.527 13:46:50 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:53.527 13:46:50 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:53.527 13:46:50 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:53.527 00:35:53.527 real 0m14.591s 00:35:53.527 user 0m5.587s 00:35:53.527 sys 0m4.212s 00:35:53.527 13:46:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.527 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:35:53.527 ************************************ 00:35:53.527 END TEST kernel_target_abort 00:35:53.527 ************************************ 00:35:53.527 13:46:50 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:53.527 13:46:50 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:53.527 13:46:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:53.527 13:46:50 -- nvmf/common.sh@116 -- # sync 00:35:53.527 13:46:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:53.527 13:46:50 -- nvmf/common.sh@119 -- # set +e 00:35:53.527 13:46:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:53.527 13:46:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:53.527 rmmod nvme_tcp 00:35:53.527 rmmod nvme_fabrics 00:35:53.527 rmmod nvme_keyring 00:35:53.527 13:46:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:53.527 13:46:50 -- nvmf/common.sh@123 -- # set -e 00:35:53.527 13:46:50 -- nvmf/common.sh@124 -- # return 0 00:35:53.527 13:46:50 -- nvmf/common.sh@477 -- # '[' -n 1225763 ']' 00:35:53.527 13:46:50 -- nvmf/common.sh@478 -- # killprocess 1225763 00:35:53.527 13:46:50 -- common/autotest_common.sh@926 -- # '[' -z 1225763 ']' 00:35:53.527 13:46:50 -- common/autotest_common.sh@930 -- # kill -0 1225763 00:35:53.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1225763) - No such process 00:35:53.527 13:46:50 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1225763 is not found' 00:35:53.527 Process with pid 1225763 is not found 00:35:53.527 13:46:50 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:53.527 13:46:50 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:56.831 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:35:57.093 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:65:00.0 (144d a80a): Already using the nvme driver 00:35:57.093 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:35:57.093 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:35:57.354 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:35:57.354 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:35:57.354 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:35:57.354 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:35:57.615 13:46:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:57.615 13:46:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:57.615 13:46:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:57.615 13:46:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:57.615 13:46:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.615 13:46:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:57.615 13:46:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.530 13:46:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:59.530 00:35:59.530 real 0m45.154s 00:35:59.530 user 1m0.180s 00:35:59.530 sys 0m16.882s 00:35:59.530 13:46:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:59.530 13:46:56 -- common/autotest_common.sh@10 -- # set +x 00:35:59.530 ************************************ 00:35:59.530 END TEST nvmf_abort_qd_sizes 00:35:59.530 ************************************ 00:35:59.791 13:46:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:59.791 13:46:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:59.791 13:46:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:59.791 13:46:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:59.791 13:46:57 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:35:59.791 13:46:57 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:35:59.791 13:46:57 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:35:59.791 13:46:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:59.791 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:35:59.791 13:46:57 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:35:59.791 13:46:57 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:35:59.791 13:46:57 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:35:59.791 13:46:57 -- common/autotest_common.sh@10 -- # set +x 00:36:07.934 INFO: APP EXITING 00:36:07.934 INFO: killing all VMs 00:36:07.934 INFO: killing vhost app 00:36:07.934 WARN: no vhost pid file found 00:36:07.934 INFO: EXIT DONE 00:36:10.481 0000:80:01.6 (8086 0b00): Already using the ioatdma 
driver 00:36:10.481 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:65:00.0 (144d a80a): Already using the nvme driver 00:36:10.481 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:10.481 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:10.772 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:10.772 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:10.772 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:10.772 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:10.772 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:14.091 Cleaning 00:36:14.091 Removing: /var/run/dpdk/spdk0/config 00:36:14.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:14.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:14.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:14.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:14.091 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:14.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:14.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:14.353 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:14.353 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:14.353 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:14.353 Removing: /var/run/dpdk/spdk1/config 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:14.353 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:14.353 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:14.353 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:14.353 Removing: /var/run/dpdk/spdk2/config 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:14.353 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:14.353 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:14.353 Removing: /var/run/dpdk/spdk3/config 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 
00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:14.353 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:14.353 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:14.353 Removing: /var/run/dpdk/spdk4/config 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:14.353 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:14.353 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:14.353 Removing: /dev/shm/bdev_svc_trace.1 00:36:14.353 Removing: /dev/shm/nvmf_trace.0 00:36:14.353 Removing: /dev/shm/spdk_tgt_trace.pid746660 00:36:14.353 Removing: /var/run/dpdk/spdk0 00:36:14.353 Removing: /var/run/dpdk/spdk1 00:36:14.353 Removing: /var/run/dpdk/spdk2 00:36:14.353 Removing: /var/run/dpdk/spdk3 00:36:14.353 Removing: /var/run/dpdk/spdk4 00:36:14.353 Removing: /var/run/dpdk/spdk_pid1005063 00:36:14.614 Removing: /var/run/dpdk/spdk_pid1006947 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1009185 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1009516 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1009572 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1009887 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1010614 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1012917 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1014310 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1014730 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1021448 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1027836 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1033899 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1078902 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1083765 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1091017 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1092530 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1094068 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1099169 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1104103 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1113578 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1113580 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1118646 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1118853 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1118996 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1119664 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1119672 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1121048 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1123069 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1125029 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1126959 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1128929 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1130916 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1138272 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1139105 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1140205 00:36:14.615 Removing: 
/var/run/dpdk/spdk_pid1141500 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1147711 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1150814 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1157940 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1164664 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1171615 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1172401 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1173186 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1173879 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1174838 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1175603 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1176329 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1177025 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1182096 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1182331 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1189439 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1189560 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1192348 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1199545 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1199562 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1205578 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1208268 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1210795 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1212000 00:36:14.615 Removing: /var/run/dpdk/spdk_pid1214549 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1215925 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1225919 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1226484 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1227155 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1230133 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1230807 00:36:14.876 Removing: /var/run/dpdk/spdk_pid1231273 00:36:14.876 Removing: /var/run/dpdk/spdk_pid745110 00:36:14.876 Removing: /var/run/dpdk/spdk_pid746660 00:36:14.876 Removing: /var/run/dpdk/spdk_pid747214 00:36:14.876 Removing: /var/run/dpdk/spdk_pid748348 00:36:14.876 Removing: /var/run/dpdk/spdk_pid748907 00:36:14.876 Removing: /var/run/dpdk/spdk_pid749197 00:36:14.876 Removing: /var/run/dpdk/spdk_pid749588 00:36:14.876 Removing: /var/run/dpdk/spdk_pid749989 00:36:14.876 Removing: /var/run/dpdk/spdk_pid750374 00:36:14.876 Removing: /var/run/dpdk/spdk_pid750520 00:36:14.876 Removing: /var/run/dpdk/spdk_pid750770 00:36:14.876 Removing: /var/run/dpdk/spdk_pid751148 00:36:14.876 Removing: /var/run/dpdk/spdk_pid752498 00:36:14.876 Removing: /var/run/dpdk/spdk_pid755835 00:36:14.876 Removing: /var/run/dpdk/spdk_pid756202 00:36:14.876 Removing: /var/run/dpdk/spdk_pid756569 00:36:14.876 Removing: /var/run/dpdk/spdk_pid756614 00:36:14.876 Removing: /var/run/dpdk/spdk_pid757205 00:36:14.876 Removing: /var/run/dpdk/spdk_pid757297 00:36:14.876 Removing: /var/run/dpdk/spdk_pid757673 00:36:14.876 Removing: /var/run/dpdk/spdk_pid758009 00:36:14.876 Removing: /var/run/dpdk/spdk_pid758321 00:36:14.876 Removing: /var/run/dpdk/spdk_pid758391 00:36:14.876 Removing: /var/run/dpdk/spdk_pid758725 00:36:14.876 Removing: /var/run/dpdk/spdk_pid758765 00:36:14.876 Removing: /var/run/dpdk/spdk_pid759201 00:36:14.876 Removing: /var/run/dpdk/spdk_pid759551 00:36:14.876 Removing: /var/run/dpdk/spdk_pid759942 00:36:14.876 Removing: /var/run/dpdk/spdk_pid760163 00:36:14.876 Removing: /var/run/dpdk/spdk_pid760337 00:36:14.876 Removing: /var/run/dpdk/spdk_pid760394 00:36:14.876 Removing: /var/run/dpdk/spdk_pid760714 00:36:14.876 Removing: /var/run/dpdk/spdk_pid760857 00:36:14.876 Removing: /var/run/dpdk/spdk_pid761101 00:36:14.876 Removing: /var/run/dpdk/spdk_pid761452 00:36:14.876 Removing: /var/run/dpdk/spdk_pid761748 
00:36:14.876 Removing: /var/run/dpdk/spdk_pid761878 00:36:14.876 Removing: /var/run/dpdk/spdk_pid762159 00:36:14.876 Removing: /var/run/dpdk/spdk_pid762511 00:36:14.876 Removing: /var/run/dpdk/spdk_pid762781 00:36:14.876 Removing: /var/run/dpdk/spdk_pid762928 00:36:14.876 Removing: /var/run/dpdk/spdk_pid763216 00:36:14.876 Removing: /var/run/dpdk/spdk_pid763573 00:36:14.876 Removing: /var/run/dpdk/spdk_pid763838 00:36:14.876 Removing: /var/run/dpdk/spdk_pid763980 00:36:14.876 Removing: /var/run/dpdk/spdk_pid764276 00:36:14.876 Removing: /var/run/dpdk/spdk_pid764627 00:36:14.876 Removing: /var/run/dpdk/spdk_pid764875 00:36:14.876 Removing: /var/run/dpdk/spdk_pid765020 00:36:14.876 Removing: /var/run/dpdk/spdk_pid765332 00:36:14.876 Removing: /var/run/dpdk/spdk_pid765683 00:36:14.876 Removing: /var/run/dpdk/spdk_pid765945 00:36:14.876 Removing: /var/run/dpdk/spdk_pid766175 00:36:14.876 Removing: /var/run/dpdk/spdk_pid766498 00:36:14.876 Removing: /var/run/dpdk/spdk_pid766848 00:36:14.876 Removing: /var/run/dpdk/spdk_pid767047 00:36:14.876 Removing: /var/run/dpdk/spdk_pid767229 00:36:14.876 Removing: /var/run/dpdk/spdk_pid767564 00:36:14.876 Removing: /var/run/dpdk/spdk_pid768223 00:36:14.876 Removing: /var/run/dpdk/spdk_pid768566 00:36:15.137 Removing: /var/run/dpdk/spdk_pid768754 00:36:15.137 Removing: /var/run/dpdk/spdk_pid769077 00:36:15.137 Removing: /var/run/dpdk/spdk_pid769431 00:36:15.137 Removing: /var/run/dpdk/spdk_pid769663 00:36:15.137 Removing: /var/run/dpdk/spdk_pid769836 00:36:15.137 Removing: /var/run/dpdk/spdk_pid770148 00:36:15.137 Removing: /var/run/dpdk/spdk_pid770500 00:36:15.137 Removing: /var/run/dpdk/spdk_pid770745 00:36:15.137 Removing: /var/run/dpdk/spdk_pid770909 00:36:15.137 Removing: /var/run/dpdk/spdk_pid771216 00:36:15.137 Removing: /var/run/dpdk/spdk_pid771562 00:36:15.137 Removing: /var/run/dpdk/spdk_pid771631 00:36:15.137 Removing: /var/run/dpdk/spdk_pid772037 00:36:15.137 Removing: /var/run/dpdk/spdk_pid776500 00:36:15.137 Removing: /var/run/dpdk/spdk_pid875289 00:36:15.137 Removing: /var/run/dpdk/spdk_pid880422 00:36:15.137 Removing: /var/run/dpdk/spdk_pid892270 00:36:15.137 Removing: /var/run/dpdk/spdk_pid898681 00:36:15.137 Removing: /var/run/dpdk/spdk_pid903725 00:36:15.138 Removing: /var/run/dpdk/spdk_pid904414 00:36:15.138 Removing: /var/run/dpdk/spdk_pid911734 00:36:15.138 Removing: /var/run/dpdk/spdk_pid911737 00:36:15.138 Removing: /var/run/dpdk/spdk_pid913216 00:36:15.138 Removing: /var/run/dpdk/spdk_pid914234 00:36:15.138 Removing: /var/run/dpdk/spdk_pid915248 00:36:15.138 Removing: /var/run/dpdk/spdk_pid915933 00:36:15.138 Removing: /var/run/dpdk/spdk_pid915936 00:36:15.138 Removing: /var/run/dpdk/spdk_pid916271 00:36:15.138 Removing: /var/run/dpdk/spdk_pid916285 00:36:15.138 Removing: /var/run/dpdk/spdk_pid916306 00:36:15.138 Removing: /var/run/dpdk/spdk_pid917356 00:36:15.138 Removing: /var/run/dpdk/spdk_pid918379 00:36:15.138 Removing: /var/run/dpdk/spdk_pid919494 00:36:15.138 Removing: /var/run/dpdk/spdk_pid920107 00:36:15.138 Removing: /var/run/dpdk/spdk_pid920235 00:36:15.138 Removing: /var/run/dpdk/spdk_pid920492 00:36:15.138 Removing: /var/run/dpdk/spdk_pid921803 00:36:15.138 Removing: /var/run/dpdk/spdk_pid923208 00:36:15.138 Removing: /var/run/dpdk/spdk_pid932978 00:36:15.138 Removing: /var/run/dpdk/spdk_pid933375 00:36:15.138 Removing: /var/run/dpdk/spdk_pid938407 00:36:15.138 Removing: /var/run/dpdk/spdk_pid945177 00:36:15.138 Removing: /var/run/dpdk/spdk_pid948277 00:36:15.138 Removing: /var/run/dpdk/spdk_pid960987 00:36:15.138 
Removing: /var/run/dpdk/spdk_pid971705 00:36:15.138 Removing: /var/run/dpdk/spdk_pid973855 00:36:15.138 Removing: /var/run/dpdk/spdk_pid974874 00:36:15.138 Removing: /var/run/dpdk/spdk_pid995168 00:36:15.138 Removing: /var/run/dpdk/spdk_pid999688 00:36:15.138 Clean 00:36:15.399 killing process with pid 688145 00:36:25.402 killing process with pid 688142 00:36:25.402 killing process with pid 688144 00:36:25.402 killing process with pid 688143 00:36:25.402 13:47:22 -- common/autotest_common.sh@1436 -- # return 0 00:36:25.402 13:47:22 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:36:25.402 13:47:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:25.402 13:47:22 -- common/autotest_common.sh@10 -- # set +x 00:36:25.402 13:47:22 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:36:25.402 13:47:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:25.402 13:47:22 -- common/autotest_common.sh@10 -- # set +x 00:36:25.402 13:47:22 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:25.402 13:47:22 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:25.402 13:47:22 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:25.402 13:47:22 -- spdk/autotest.sh@394 -- # hash lcov 00:36:25.402 13:47:22 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:25.402 13:47:22 -- spdk/autotest.sh@396 -- # hostname 00:36:25.402 13:47:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:25.402 geninfo: WARNING: invalid characters removed from testname! 
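For readers following along, the coverage post-processing running in this part of the trace condenses to roughly the lcov sequence sketched below. SPDK_DIR, OUT and LCOV_OPTS are shorthand introduced here only for readability, not names used by the job itself, which passes the full /var/jenkins/workspace/nvmf-tcp-phy-autotest paths and also applies a few more -r filter passes (examples/vmd, app/spdk_lspci, app/spdk_top) in the same pattern.

    # Shorthand for this sketch only; the job passes absolute workspace paths.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$SPDK_DIR/../output
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q)

    # Capture test-time counters from the SPDK tree, tagged with the hostname.
    lcov "${LCOV_OPTS[@]}" -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge with the baseline capture, then strip out-of-tree and system sources.
    lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"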
00:36:47.400 13:47:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:49.949 13:47:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:51.865 13:47:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:53.779 13:47:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:55.222 13:47:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:56.608 13:47:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.995 13:47:55 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:57.995 13:47:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.995 13:47:55 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:57.995 13:47:55 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.995 13:47:55 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.995 13:47:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.995 13:47:55 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.995 13:47:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.995 13:47:55 -- paths/export.sh@5 -- $ export PATH 00:36:57.995 13:47:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.995 13:47:55 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:57.995 13:47:55 -- common/autobuild_common.sh@438 -- $ date +%s 00:36:57.995 13:47:55 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721994475.XXXXXX 00:36:57.995 13:47:55 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721994475.yGXf6U 00:36:57.995 13:47:55 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:36:57.995 13:47:55 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']' 00:36:57.995 13:47:55 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:36:57.995 13:47:55 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:36:57.995 13:47:55 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:57.995 13:47:55 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:57.995 13:47:55 -- common/autobuild_common.sh@454 -- $ get_config_params 00:36:57.995 13:47:55 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:36:57.995 13:47:55 -- common/autotest_common.sh@10 -- $ set +x 00:36:57.995 13:47:55 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:36:57.995 13:47:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:36:57.995 13:47:55 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:57.995 13:47:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:57.995 13:47:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:57.995 13:47:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:57.995 13:47:55 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:36:57.995 13:47:55 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:57.995 13:47:55 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:57.995 13:47:55 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:58.257 13:47:55 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:58.257 + [[ -n 633745 ]] 00:36:58.257 + sudo kill 633745 00:36:58.268 [Pipeline] } 00:36:58.286 [Pipeline] // stage 00:36:58.292 [Pipeline] } 00:36:58.310 [Pipeline] // timeout 00:36:58.315 [Pipeline] } 00:36:58.333 [Pipeline] // catchError 00:36:58.339 [Pipeline] } 00:36:58.357 [Pipeline] // wrap 00:36:58.364 [Pipeline] } 00:36:58.380 [Pipeline] // catchError 00:36:58.390 [Pipeline] stage 00:36:58.392 [Pipeline] { (Epilogue) 00:36:58.407 [Pipeline] catchError 00:36:58.410 [Pipeline] { 00:36:58.424 [Pipeline] echo 00:36:58.426 Cleanup processes 00:36:58.432 [Pipeline] sh 00:36:58.725 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.725 1247925 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.740 [Pipeline] sh 00:36:59.030 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:59.031 ++ grep -v 'sudo pgrep' 00:36:59.031 ++ awk '{print $1}' 00:36:59.031 + sudo kill -9 00:36:59.031 + true 00:36:59.044 [Pipeline] sh 00:36:59.332 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:11.582 [Pipeline] sh 00:37:11.870 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:11.870 Artifacts sizes are good 00:37:11.887 [Pipeline] archiveArtifacts 00:37:11.896 Archiving artifacts 00:37:12.169 [Pipeline] sh 00:37:12.460 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:12.477 [Pipeline] cleanWs 00:37:12.488 [WS-CLEANUP] Deleting project workspace... 00:37:12.488 [WS-CLEANUP] Deferred wipeout is used... 00:37:12.496 [WS-CLEANUP] done 00:37:12.498 [Pipeline] } 00:37:12.518 [Pipeline] // catchError 00:37:12.532 [Pipeline] sh 00:37:12.821 + logger -p user.info -t JENKINS-CI 00:37:12.832 [Pipeline] } 00:37:12.847 [Pipeline] // stage 00:37:12.853 [Pipeline] } 00:37:12.873 [Pipeline] // node 00:37:12.880 [Pipeline] End of Pipeline 00:37:12.910 Finished: SUCCESS
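For anyone adapting this job, the "Cleanup processes" step in the epilogue above is the usual pgrep-and-kill idiom; a minimal sketch, assuming the same workspace path as this run, is:

    # Kill anything still holding the job workspace; 'true' keeps the step
    # from failing when no matching processes are found (as in this run).
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true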